(The specifics of your postulated definition, especially that recursion->intelligence, seems like a not-very-useful way to define things, since Turing completeness probably means that once you clear a fairly low bar, your amount of possible recursion is just a measure of your hardware, when we usually want ‘intelligence’ to also capture something about your software. But the more standard information-theoretic notion of coding for a goal within a world-model would also say that bigger world models need (on average) bigger codes.)
So it might be a bit confusing, but by recursion here I did not mean how many loops you run in a program; I meant what order of signs you can create and store, which is a statement about software. Basically, how many signs you can meaningfully connect to one another. Not every architecture can represent higher-order signs; an easy example is a single-layer versus a multilayer perceptron. Perhaps recursion was the wrong word, but at the time I was thinking about how a sign can refer to another sign that refers to another sign, and so on, creating a chain of signifiers that is still meaningful so long as the higher-order signs refer to more than one lower-order sign.
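To make the perceptron example concrete, here is a toy numpy sketch (my own illustration, not something from the original post), using XOR as a stand-in for a minimal higher-order combination of two signs: a single threshold unit cannot represent it, while a network with one hidden layer can.

```python
# Toy illustration: XOR stands in for a minimal "higher-order sign"
# built out of two lower-order ones.
import itertools
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_xor = np.array([0, 1, 1, 0])

def step(z):
    return (z > 0).astype(int)

# 1) Brute-force search over single threshold units (w1, w2, bias):
#    no setting classifies all four XOR cases correctly, because XOR is
#    not linearly separable. (A grid search illustrates this; it isn't a proof.)
grid = np.linspace(-2, 2, 41)
best = 0
for w1, w2, b in itertools.product(grid, grid, grid):
    preds = step(X @ np.array([w1, w2]) + b)
    best = max(best, int((preds == y_xor).sum()))
print("best single-layer accuracy on XOR:", best, "out of 4")  # prints 3

# 2) A two-layer network with hand-set weights computes XOR exactly:
#    hidden unit 1 acts as OR, hidden unit 2 as AND, output as OR-and-not-AND.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])
W2 = np.array([1.0, -1.0])
b2 = -0.5
hidden = step(X @ W1.T + b1)
out = step(hidden @ W2 + b2)
print("two-layer output on XOR inputs:", out.tolist())  # prints [0, 1, 1, 0]
```

The point is just that the second network can hold a sign whose meaning is a relation between other signs, which is the kind of capacity I meant, independent of how much hardware you throw at either network.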
When we’re taking the human perspective, it’s fine to say “the smarter agent has such a richer and more complex conception of its goal,” without that implying that the smarter agent’s goal has to be different than the dumber agent’s goal.
The point of bringing semiotics into the mix here is to show that the meaning of a sign, such as a goal, depends on the things we associate with it. The human perspective is just a way of expressing that goal at one moment in time, with our specific associations attached to it.
a) Actions like exploration or “play” could be derived (instrumental) behaviors, rather than final goals. The fact that exploration is given as a final goal in many present-day AI systems is certainly interesting, but isn’t very relevant to the abstract theoretical argument.
In my follow-up post I actually show the way in which it is relevant.
b) Even if you assume that every smart AI has “and also, explore and play” as part of its goals, doesn’t mean the other stuff can’t be alien.
The argument about alien values isn't a logical one but a statistical one: any AI situated in human culture will tend to have values related to the signs created and used by that culture, although we can expect outliers.