I’ll address the rest in a bit, but about the notation:
Questions to you:
Is T → U the Cartesian product of T and U?
What is *?
T -> U is a function from set T to set U. P* means a list of elements of set P, where the difference from a set is that the elements of a list come in a specific order.
The notation as a whole was a somewhat fudged version of the intelligent agent formalism. The idea is to set up a skeleton for modeling any sort of intelligent entity, based on the idea that the entity only learns about its surroundings through a series of perceptions, which might for example be a series of matrices corresponding to the images a robot’s eye camera sees, and can only affect its surroundings by choosing among the actions it is capable of, such as moving a robotic arm or displaying text on a terminal.
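To make the notation concrete, here is a minimal sketch in Haskell, with made-up Percept and Action types standing in for whatever P and A happen to be for a particular agent; the agent itself is just a value of type [Percept] -> Action, i.e. a function from P* to A.

```haskell
module Agent where

-- One perception: here, hypothetically, a camera frame as a matrix of pixel values.
type Percept = [[Double]]

-- The actions this particular agent is capable of (again, made up for illustration).
data Action = MoveArm Double Double | DisplayText String
  deriving Show

-- The agent formalism itself: a function from the full perception history
-- (P*, an ordered list of elements of P) to a single chosen action (an element of A).
type Agent = [Percept] -> Action

-- A trivial example agent: its choice depends only on how many perceptions it has seen.
exampleAgent :: Agent
exampleAgent history
  | length history < 10 = DisplayText "still looking"
  | otherwise           = MoveArm 0.5 1.0
```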
The agent model is pretty all-encompassing, but also not that useful except as the very first starting point, since all of the difficulty is in the exact details of the function that turns the (most likely massive) amount of data in the perception history into a well-chosen action that efficiently furthers the goals of the AI.
Modeling AIs as a function from a history of perceptions to an action is also related to thought experiments like Ned Block’s Blockhead, where a trivial AI that passes the Turing test with flying colors is constructed by merely enumerating every possible partial conversation up to a certain length and writing down the response a human would give at that point in the conversation.
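As a rough illustration (with an obviously tiny, made-up table in place of the astronomically large one the thought experiment imagines), Blockhead fits the same history-to-action shape as any other agent, except that the function is nothing but a lookup:

```haskell
module Blockhead where

import qualified Data.Map as Map

type Utterance    = String
type Conversation = [Utterance]   -- the conversation so far, in order

-- The precomputed table: every partial conversation paired with the reply
-- a human would give at that point. Real Blockhead enumerates all of them
-- up to some length; these two entries are purely illustrative.
blockheadTable :: Map.Map Conversation Utterance
blockheadTable = Map.fromList
  [ (["Hello"], "Hi there, how are you?")
  , (["Hello", "Hi there, how are you?", "Fine, thanks"], "Glad to hear it.")
  ]

-- The "agent" is again a function from a history to an action,
-- but all it does is look the history up in the table.
blockhead :: Conversation -> Utterance
blockhead history = Map.findWithDefault "..." history blockheadTable
```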
Scott Aaronson’s “Why Philosophers Should Care About Computational Complexity” proposes augmenting the usual high-level mathematical frameworks with limits on the complexity of the black-box functions, so that the framework rejects cases like Blockhead, which seem to be very different from what we’d like to have when we’re looking for a computable function that implements an AI.