I pretty much agree with you. Human intelligence may be high because it is used to predict/interpret the behaviour of others. Consciousness may be that same intelligence turned inward. But:
3. Given enough computational power and a compatible architecture, the agent will develop consciousness if and only if it needs to interact with other agents of the same kind, or at least of a similar level of intelligence.
This does not automatically follow, I think. There may be other ways that lead to the same result.
An existing example would be cephalopods (octopus, squid & co.). From what I understand, they are highly intelligent, yet live very short lives, are not social (they don’t live in large groups, as humans do), and have no “culture” (tricks that are taught from generation to generation)[1].
Instead, their intelligence seems to be related to their complex bodies, which require lots of processing power to control.
Which is why I think that interaction with other similar entities is not needed for consciousness to emerge. I think the interaction just has to be complex (which is more general than your requirement of interaction with complex beings). For example, a sufficient number of “simple” input/output channels (lots of suckers) can be just as complex as, say, human language; the rough sketch below illustrates the scale. Because it is efficient to model/simplify this complexity, intelligence, and then consciousness, may emerge.
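To make that channel-counting intuition concrete, here is a back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption, not a measurement: the sucker count is a rough figure often cited for a common octopus, the states per sucker are invented, and the ~1 bit per character for English is Shannon’s classic estimate. The point is only that the joint state space of many simple channels grows exponentially with their number.

```python
import math

# Back-of-the-envelope comparison: the joint state space of many "simple"
# input/output channels vs. the information content of a sentence.
# All numbers below are illustrative assumptions, not measurements.

SUCKERS_PER_OCTOPUS = 2000  # rough figure for a common octopus
STATES_PER_SUCKER = 4       # assumed: a few distinguishable grip/taste levels

# Bits in one joint "frame" of all sucker states, assuming the channels
# are independent (an upper bound; real suckers are surely correlated).
sucker_frame_bits = SUCKERS_PER_OCTOPUS * math.log2(STATES_PER_SUCKER)

# Shannon estimated English at roughly 1 bit per character;
# take a generous 100-character sentence.
sentence_bits = 100 * 1.0

print(f"one sucker frame: ~{sucker_frame_bits:.0f} bits")  # ~4000 bits
print(f"one sentence:     ~{sentence_bits:.0f} bits")      # ~100 bits
```

The independence assumption overstates the real entropy, but that is exactly the point of the argument above: the correlations between channels are what make it efficient to model/simplify the input, and that modeling pressure is where intelligence would come from.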
I am therefore of the opinion that either octopuses are already conscious, or that, if you were to increase the number of their arms n, then as n → ∞ they sooner or later should be.
In any case, they may dream.
[1] This may not be completely correct. There seems to be a kind of hunting tactic involving one octopus and one grouper (a fish), where each in turn drives prey towards the other. The grouper, being longer-lived, may teach this to others?