‘We can also say, with greater confidence than for the AI path, that the emulation path will not succeed in the near future (within the next fifteen years, say) because we know that several challenging precursor technologies have not yet been developed. By contrast, it seems likely that somebody could in principle sit down and code a seed AI on an ordinary present-day personal computer; and it is conceivable—though unlikely—that somebody somewhere will get the right insight for how to do this in the near future.’ - Bostrom (p36)
Why is it more plausible that a person can sit down and invent a human-level artificial intelligence than that they can sit down and invent the technical means to produce brain emulations?
Because one requires only a theoretical breakthrough and the other requires engineering. Ideas iterate very quickly. Hardware has to be built. The machines that make the machines you want to use have to be designed; whole industries may have to be invented. A theoretical breakthrough doesn’t have the same lag time.
If I work as a theorist and I have a brilliant insight, I start writing the paper tomorrow. If I work as an experimentalist and I have a brilliant insight, I start writing the grant to purchase the new equipment I’ll need.
This is the part of this section I find least convincing.
Can you elaborate?
How is theoretical progress different from engineering progress?
Is the following an example of valid inference?
In principle, it is also conceivable (but not probable) that someone will sit down and make a brain emulation machine.
Making a brain emulation machine requires (1) the ability to image a brain at sufficient resolution, and (2) computing power in excess of the largest supercomputers available today. Both of these require a long engineering lead time and a commitment of resources, and neither is something we expect to be solved by some clever insight. Clever insight alone won’t ever enable you to construct record-setting supercomputers out of leftover hobbyist computer parts, toothpicks, and superglue.
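To put rough numbers on the scale of (2), here is a back-of-envelope sketch in Python. Every figure is an illustrative assumption on my part (neuron and synapse counts, an assumed cost per synaptic event in the ballpark of spiking-network-level estimates, and a mid-2010s top supercomputer in the tens of petaflops), not a precise measurement:

```python
# Rough back-of-envelope comparison: estimated compute for a spiking-level
# whole-brain emulation versus a mid-2010s top supercomputer.
# All figures below are illustrative assumptions, not precise measurements.

NEURONS = 8.6e10                  # ~86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 1e4         # rough average synapse count per neuron
AVG_FIRING_RATE_HZ = 10           # assumed average spike rate
FLOPS_PER_SYNAPTIC_EVENT = 100    # assumed cost to update one synapse per spike

wbe_flops = (NEURONS * SYNAPSES_PER_NEURON
             * AVG_FIRING_RATE_HZ * FLOPS_PER_SYNAPTIC_EVENT)

TOP_SUPERCOMPUTER_FLOPS = 3.4e16  # roughly Tianhe-2-class sustained performance

print(f"Estimated emulation requirement: {wbe_flops:.1e} FLOPS")
print(f"Assumed top supercomputer:       {TOP_SUPERCOMPUTER_FLOPS:.1e} FLOPS")
print(f"Shortfall: about {wbe_flops / TOP_SUPERCOMPUTER_FLOPS:.0f}x")
```

Under these (debatable) assumptions the requirement overshoots today’s biggest machines by well over an order of magnitude, which is why the bottleneck looks like engineering and resources rather than insight.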
Why do we assume that all that is needed for AI is a clever insight, not the insight-equivalent of a long engineering time and commitment of resources?
Because the scope of the problems involved (e.g. the search space over programs) can be calculated and compared with other similarly structured but solved problems (e.g. narrow AI). And in a very abstract theoretical sense, today’s desktop computers are probably sufficient for running a fully optimized human-level AGI. This is a sensible and consistent result: it should not be surprising that it takes many orders of magnitude more computational power to emulate a computing substrate running a general intelligence (the brain simulated by a supercomputer) than to run a natively coded AGI. Designing the program which implements the native, non-emulative AGI is basically a “clever insight” problem, or perhaps more accurately a large series of clever insights.
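To make the emulation-overhead point concrete, here is a toy sketch of my own (the scale is made up, only the principle matters): simulating even a trivial computation at the level of its substrate, gate by gate, costs far more operations than computing the same result natively, and a brain simulated synapse by synapse pays the analogous penalty relative to a natively coded AGI.

```python
# Toy illustration: emulate an 8-bit ripple-carry adder gate by gate and count
# the substrate-level updates, versus a single native add instruction.

GATE_EVALS = 0

def nand(a, b):
    """One NAND gate; every call stands for one substrate-level update."""
    global GATE_EVALS
    GATE_EVALS += 1
    return 1 - (a & b)

def xor(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

def full_adder(a, b, carry):
    s1 = xor(a, b)
    return xor(s1, carry), or_(and_(a, b), and_(s1, carry))

def emulated_add(x, y, bits=8):
    """Add two integers by simulating the adder circuit one gate at a time."""
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

assert emulated_add(100, 55) == 100 + 55  # natively: one machine instruction
print(f"Gate evaluations to emulate one 8-bit add: {GATE_EVALS}")
```

One 8-bit addition, natively a single machine instruction, takes well over a hundred simulated gate updates here (plus the interpreter overhead around them); scaling the same idea up to a brain-sized substrate is where the orders of magnitude come from.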
I agree.
Why does this make it more plausible that a person can sit down and invent a human-level artificial intelligence than that they can sit down and invent the technical means to produce brain emulations?
We already know the technical means to produce brain emulations: it requires just very straightforward advances in imaging and larger supercomputers. Various smaller-scale brain emulation projects have already proved the concept. It’s just that doing it at a larger scale and finer resolution requires a lot of person-years to get done.
EDIT: In Rumsfeld-speak, whole-brain emulation is a series of known knowns: lots of work that we know needs to be done, and someone just has to do it. AGI, by contrast, involves known unknowns: we don’t know precisely what has to be done, so we can’t quantify exactly how long it will take. We could guess, but it remains possible that a clever insight might find a better, faster, cheaper path.
Sorry for the pause, internet problems at my place.
Anyway, it seems you’re right: technically, it might be more plausible that AI gets coded sooner (higher variance), even though I think it’ll take more time than emulation (on average).