An AI running chimp-level intelligent software might think ten million times faster than a chimp mind, but give a chimp ten million years to think about science and it still couldn't match a normal human (is that actually true, by the way? That's one question).
But if you can bridge the gap between chimp-level and human-level software, a human mind thinking ten million times faster can quickly improve itself and go FOOM.
I think there's a moderate chance of this working out. One note about emulating a chimp mind: you don't have to let the chimp do its own optimization. You can run a (probably highly unethical) evolutionary algorithm that prunes, hypertrophies, and reshapes various parts of the chimp-mind-algorithm to boost its effective intelligence, generation by generation, and end up with something at human level or beyond. All of this depends, of course, on having arbitrary amounts of computing power.
I admit I'm stealing all this from the Quantum Thief books, but it would probably be easier to enhance even a human emulation by this iterative method than to let the emulation try to learn all of neuroscience and start manually tinkering with itself. In other words: make an emulation of me, copy it 10,000 times with minor modifications to the architecture of each copy, subject them to an extensive battery of tests, take the top 100 performers, spawn another 10,000 copies based on the successful changes, and repeat until you have something that started out as "me" but outperforms me by leaps and bounds. Since I'm already riffing on science fiction, I might as well point out that you could apply a forcing function that minimizes the number of neurons and synapses with each generation, so that Moridinamael-Prime ends up not only smarter than Moridinamael-Baseline but also simpler and more efficient, in the sense of being easier to simulate.
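To make that loop concrete, here is a minimal sketch of the generational scheme I'm describing, with the fitness term and the complexity-penalizing forcing function spelled out. Everything in it is hypothetical shorthand: clone_and_mutate, run_test_battery, and synapse_count stand in for machinery nobody knows how to build, and the stubs below exist only so the skeleton runs.

```python
import random

# --- Placeholders: stand-ins for capabilities we don't actually have ---
def clone_and_mutate(emulation):
    """Copy an emulation and apply a small random architectural tweak."""
    return dict(emulation, tweak=random.random())

def run_test_battery(emulation):
    """Score the emulation on an extensive battery of tests."""
    return emulation.get("tweak", 0.0)

def synapse_count(emulation):
    """Rough measure of how expensive this emulation is to simulate."""
    return 1_000_000
# -----------------------------------------------------------------------

POP_SIZE = 10_000          # copies spawned per generation
SURVIVORS = 100            # top performers kept each round
COMPLEXITY_WEIGHT = 1e-9   # forcing function: penalty per synapse

def evolve(baseline, generations):
    """Iteratively copy, mutate, test, and select emulations."""
    parents = [baseline]
    for _ in range(generations):
        # Spawn the next generation from the current survivors.
        population = [clone_and_mutate(random.choice(parents))
                      for _ in range(POP_SIZE)]
        # Fitness = test performance minus a size penalty, so later
        # generations trend smarter *and* cheaper to simulate.
        scored = sorted(
            population,
            key=lambda em: run_test_battery(em)
                           - COMPLEXITY_WEIGHT * synapse_count(em),
            reverse=True,
        )
        parents = scored[:SURVIVORS]
    return parents[0]  # "Moridinamael-Prime"

# e.g. evolve({}, generations=50)
```

The only design choice doing real work here is the subtraction in the fitness function: that's the forcing function, and turning COMPLEXITY_WEIGHT up or down trades raw test performance against simulation cost.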
And lastly, is an FAI possible for every possible kind of mind? Are there some kinds of minds for which you can’t have a superpowerful, superintelligent FAI? If there are, how do we know we’re not one of them?
I see no reason why humans should be particularly incompatible with the ideas behind FAI. If FAI boils down to "do what this mind would want if the mind thought about it for a long time", I don't immediately see anything about human minds that makes that permanently irreconcilable.