Would brain emulation work as a potential shortcut to the singularity? Upload a mind, speed up its subjective time, let it work on the problem? What could EY do with a thousand years to work on FAI? Could he come back in a few days of our time with the answer?
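A quick back-of-the-envelope check on that last question. The sketch below treats the speedup factor as a free parameter; the specific values, including the ten-million-fold figure that comes up in the foom discussion further down, are just illustrative assumptions about what an emulation might achieve:

```python
# Back-of-the-envelope arithmetic: how much real time does a given amount of
# subjective thinking time cost at a given emulation speedup?
# The speedup factors used here are illustrative assumptions, not claims
# about what emulation hardware could actually deliver.

DAYS_PER_YEAR = 365.25

def wall_clock_days(subjective_years: float, speedup: float) -> float:
    """Real-world days needed for an emulation to experience `subjective_years`."""
    return subjective_years * DAYS_PER_YEAR / speedup

for speedup in (1e5, 1e6, 1e7):
    days = wall_clock_days(1000, speedup)
    print(f"{speedup:>12,.0f}x speedup: 1000 subjective years = "
          f"{days:7.2f} real days ({days * 24:6.1f} hours)")
```

At a ten-million-fold speedup, a thousand subjective years fits into under an hour of our time; even a hundred-thousand-fold speedup gets there in under four days.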
Does an AI have to have a utility function? Can we just make it good at giving answers, instead of asking it to act on them?
Going over the Yudkowsky/Hanson AI-Foom debate, it seems like the basic issue is how much of a difference an insight or two can make in an AI.
An AI with chimp-level software would run ten million times faster than an actual chimp's brain, but give a chimp ten million years to think about science and it still couldn't match a normal human (is that true, by the way? That's one question).
But if you can bridge the gap between chimp-level and human-level software, a human mind thinking ten million times faster can quickly improve itself and go FOOM.
So the question is: how big a gap is there between chimp-level and human-level software? EY argues that since evolution bridged it in only five million years, it can't be that big a difference.
So the whole world could get to chimp-level software (an AI that can't do much on its own), and then one little research group might do the work of five million years of dumb, blind evolution in a few days or weeks, cross the line from chimp-level to human-level software, and go FOOM.
This is all new to me, so my first question is: are there any glaring bits I've got wrong? Second, am I right in thinking EY is right, and five million years of evolution can't produce that much of a difference? And is there a good reason to think Hanson is wrong (or right) that the difference between humans and chimps isn't mostly in the brain itself, but in the brain learning to talk and socialize, with most of the difference lying in the cultural content built up over five million years of that talking and socializing?
And lastly, is an FAI possible for every possible kind of mind? Are there some kinds of minds for which you can’t have a superpowerful, superintelligent FAI? If there are, how do we know we’re not one of them?
The sequence reviewing “Why Everyone (Else) Is a Hypocrite” talks about our minds having different modules: we hold different beliefs and different utility functions simultaneously, and act according to whichever module happens to be activated. If that’s the case, is it necessarily possible to have an FAI that can serve us, when the “us” being served simultaneously wants different, contradictory things?
> An AI with chimp-level software would run ten million times faster than an actual chimp's brain, but give a chimp ten million years to think about science and it still couldn't match a normal human (is that true, by the way? That's one question).
> But if you can bridge the gap between chimp-level and human-level software, a human mind thinking ten million times faster can quickly improve itself and go FOOM.
I think there’s a moderate chance of this working out. One note about emulating a chimp mind: you don’t have to let the chimp do its own optimization; you can run a (probably highly unethical) evolutionary algorithm that prunes, hypertrophies, and reshapes various parts of the chimp-mind-algorithm to boost its effective intelligence, generation by generation, until you end up with something human-level or beyond. All of this sort of depends on having arbitrary amounts of computing power.
I admit I’m stealing all this from the Quantum Thief books, but it would probably be easier to enhance even a human emulation by this iterative method than to let the emulation try to learn all of neuroscience and start manually tinkering with itself. In other words: make an emulation of me, copy it 10,000 times and make minor modifications to the architecture of each copy, subject them to an extensive battery of tests, take the top 100 performers, spawn another 10,000 copies based on the successful changes, and repeat until you have something that started out as “me” but outperforms me by leaps and bounds. Since I’m already riffing on science fiction, I might as well point out that you could also apply a forcing function that minimizes the number of neurons and synapses with each generation, so that Moridinamael-Prime ends up not only smarter than Moridinamael-Baseline but also simpler and more efficient, in the sense of being easier to simulate.
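To make the shape of that loop concrete, here is a minimal toy sketch in Python. Everything in it is a placeholder of my own invention (the dict-of-floats “mind”, the `mutate` and `score` functions, the specific constants); it's only meant to show the spawn-test-cull-repeat structure, with a complexity penalty standing in for the forcing function toward simpler minds:

```python
import copy
import random

# Toy sketch of the iterative selection loop described above.
# `mutate` and `score` are hypothetical stand-ins: a real emulation would not
# be a dict of floats, and real fitness would come from an extensive battery
# of tests rather than a single number.

POPULATION = 10_000          # copies spawned each generation
SURVIVORS = 100              # top performers kept
GENERATIONS = 50
COMPLEXITY_PENALTY = 0.001   # the "forcing function" toward smaller, cheaper minds

def mutate(mind):
    """Return a copy with a minor random modification to its 'architecture'."""
    child = copy.deepcopy(mind)
    i = random.randrange(len(child["params"]))
    child["params"][i] += random.gauss(0, 0.1)
    # occasionally prune a component entirely, shrinking the mind
    if len(child["params"]) > 1 and random.random() < 0.05:
        child["params"].pop(random.randrange(len(child["params"])))
    return child

def score(mind):
    """Stand-in for the test battery, minus a penalty for size."""
    performance = sum(mind["params"])  # placeholder performance metric
    return performance - COMPLEXITY_PENALTY * len(mind["params"])

# "Moridinamael-Baseline": the starting emulation, here just 200 random numbers.
baseline = {"params": [random.gauss(0.0, 1.0) for _ in range(200)]}
elites = [baseline]

for generation in range(GENERATIONS):
    offspring = [mutate(random.choice(elites)) for _ in range(POPULATION)]
    offspring.sort(key=score, reverse=True)   # rank by the stand-in fitness score
    elites = offspring[:SURVIVORS]            # keep the top 100

best = elites[0]
print(f"best score after {GENERATIONS} generations: {score(best):.3f}, "
      f"size: {len(best['params'])} parameters")
```

The penalty term is what pushes each generation toward “simpler and more efficient” as well as higher-scoring, at the cost of some raw performance.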
> And lastly, is an FAI possible for every possible kind of mind? Are there some kinds of minds for which you can’t have a superpowerful, superintelligent FAI? If there are, how do we know we’re not one of them?
I see no reason why humans should be particularly incompatible with the ideas behind FAI. If FAI boils down to “do what this mind would want if the mind thought about it for a long time”, I don’t immediately see anything permanently irreconcilable about that for humans.