The mind space of humans is vast. It is determined not by genetics but by memetics, and AIs would necessarily inherit our memetics and thus will necessarily start as samples in our mindspace.
To put it in LW lingo, AIs will necessarily inherit our priors, assumptions, and our vast mountain of beliefs and knowledge.
The only way around this would be to evolve them in some isolated universe from scratch, but that would in fact be more dangerous, besides being unrealistic.
So no, the eventual mindspace of AIs may be vast, but that mindspace necessarily starts out as just our mindspace, and then expands.
Having human-like AIs is no more use to us than having… humans.
And this is just blatantly false. At the very least, we could have billions of Einstein-level intelligences who all thought thousands of times faster than us. You can talk all you want about how much better your non-human-like AI would be even than that, but at that point we are just digressing into an imaginary pissing contest.
The mind space of humans is vast. It is determined not by genetics but by memetics, and AIs would necessarily inherit our memetics and thus will necessarily start as samples in our mindspace.
The Kolmogorov complexity of humans is quite high. See this list of human universals; every element on that list cuts humans’ share of general mind space by a factor of at least two, probably much more (even those universals that are only approximately true do this).
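A rough quantitative gloss of that claim (my sketch, with the independence assumption as the load-bearing piece, not something the comment itself asserts): if each of the N universals on the list were an independent constraint ruling out at least half of the remaining candidate minds, the human-occupied fraction of general mind space would be

\[
\frac{\lvert\text{human-like minds}\rvert}{\lvert\text{mind space}\rvert} \;\le\; \left(\tfrac{1}{2}\right)^{N} = 2^{-N},
\]

i.e. the list alone would pin down on the order of N bits of description length. Whether the universals really are independent constraints on minds, rather than consequences of sharing our languages and societies, is exactly what the reply below disputes.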
This list doesn’t really help your point:
Almost all of the linguistic ‘universals’ are universal to languages, not humans—and would necessarily apply to AIs who speak our languages.
Most of the social ‘universals’ are universal to societies, not humans, and apply just as easily to birds, bees, and dolphins: coalitions, leaders, conflicts?
AIs will inherit some understanding of all the idiosyncrasies of our complex culture just by learning our language and being immersed in it.
Kolmogorov complexity is not immediately relevant to this point. No matter how large the evolutionary landscape is, it has a small number of stable attractors, and those are what show up as ‘universals’: convergent species, parallel evolution, and so on.
We are not going to create AIs by randomly sampling mindspace. The only way they could be truly alien is if we evolved a new simulated world from scratch, with its own evolutionary history and de novo culture and language. But of course that is unrealistic and useless on so many levels.
They will necessarily be samples from our mindspace—otherwise they wouldn’t be so useful.
They will necessarily be samples from our mindspace—otherwise they wouldn’t be so useful.
Computers so far have been very different from us. That is partly because they have been built to compensate for our weaknesses—to be strong where we are weak. They compensate for our poor memories, our terrible arithmetic module, our poor long-distance communication skills—and our poor ability at serial tasks. That is how they have managed to find a foothold in society—before mastering nanotechnology.
IMO, we will probably be seeing considerably more of that sort of thing.
Computers so far have been very different from us.
[snip]
Agree with your point, but so far computers have been extensions of our minds and not minds in their own right. And perhaps that trend will continue long enough to delay AGI for a while.
But for AGI, for them to be minds, they will need to think and understand human language—and this is why I say they “will necessarily be samples from our mindspace”.