I pretty much agree with you. Human intelligence may be high because it is used to predict/interpret the behaviour of others. Consciousness may be that same intelligence turned inward. But:
> 3. Given enough computational power and a compatible architecture, the agent will develop consciousness if and only if it needs to interact with other agents of the same kind, or at least of a similar level of intelligence.
I don’t think this automatically follows; there may be other routes that lead to the same result.
An existing example would be cephalopods (octopus, squid & co.). From what I understand, they are highly intelligent, yet live very short lives, are not social (they don’t live in large groups the way humans do), and have no “culture” (tricks taught from generation to generation)[1].
Instead, their intelligence seems to be related to their complex bodies, which require lots of processing power.
Which is why I think that interaction with other similar entities is not needed for consciousness to emerge; the interaction just has to be complex (which is more general than your requirement of interaction with complex beings). For example, a sufficient number of “simple” input/output channels (lots of suckers) can be just as complex as, say, human language. Because it is efficient to model/simplify this complexity, intelligence and then consciousness may emerge.
I am therefore of the opinion that either octopuses are already conscious, or that if you were to increase the number of their arms n, then as n → ∞ they should sooner or later become so.
In any case, they may dream.
[1] This may not be completely correct. There seems to be a kind of hunting tactic involving one octopus and one grouper (a fish), where each in turn drives prey towards the other. The grouper, being longer-lived, may teach this to others?
I read the original post and kind of liked it, but I also very much disagreed with it.
I am somewhat befuddled by the chain of reasoning in that post, as well as that of the community in general.
In mathematics, you may start from some assumptions and derive many things; if you ever run into an inconsistency, you normally conclude that one of your assumptions is wrong (provided your derivation is sound).
Here, however, it seems to me that you make assumptions, derive something ludicrous, and then pat yourself on the back and conclude that obviously everything must be correct. To me, that does not follow.
If you assume an omnipotent basilisk (if you multiply by infinity), then obviously you can derive anything you damn well please.
One concrete example (there were many more in the original post):
> The way to recognize local extrema is exactly to walk away from them far enough. If you know of another way, please elaborate, because I’d very much like to sell it myself if you don’t mind.
I would argue that this is in fact the most important point. You assume that you are looking for an optimum in a static potential landscape. The dinosaurs kind of did the same.
The only way to keep surviving in a dynamic potential landscape is to keep optimizing, not to pat yourself on the back for a job well done and just stop.
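To make the “walk away far enough” point concrete, here is a minimal sketch (my own illustration, not anything from the post; the toy function and all names are invented): a greedy hill climber that declares victory at the first local minimum it finds, versus the same climber restarted from points far away, which is the only way it ever discovers the deeper basin.

```python
import random

def f(x):
    # A toy potential landscape with two basins:
    # a shallow local minimum near x ~ 1.35 and a deeper global one near x ~ -1.47.
    return x**4 - 4 * x**2 + x

def hill_climb(x, step=0.01, iters=10_000):
    # Greedy local search: move to the better neighbour, and stop as soon as
    # neither neighbour improves -- i.e. stop at the first local minimum.
    for _ in range(iters):
        best = min((x - step, x, x + step), key=f)
        if best == x:
            break
        x = best
    return x

def climb_with_restarts(n_restarts=20, seed=0):
    # "Walking away far enough": restart the same greedy climber from
    # scattered starting points and keep the best finish across all basins.
    rng = random.Random(seed)
    finishes = (hill_climb(rng.uniform(-3.0, 3.0)) for _ in range(n_restarts))
    return min(finishes, key=f)

stuck = hill_climb(2.0)          # starts in the shallow basin and stays there
escaped = climb_with_restarts()  # restarts reach the deeper basin
```

A dynamic landscape makes this worse: if f itself drifts over time, even the global minimum you found yesterday may be a bad spot today, so the restarts (or some other exploration pressure) can never safely be switched off.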
A simple example: kids during puberty seem to do more or less the opposite of whatever their parents tell them. Why? Because they know (somehow) that there are other, better minima in reach, even if their parents are the god-kings of the earth. (Who wants to be a carpenter when you can be a Youtuber, famous for Idontreallycare...)
Anyway, in my opinion, boredom is a solution to the same class of problem, just not intergenerationally but rather on a day-to-day basis.