How is theoretical progress different from engineering progress?
Is the following an example of valid inference?
In principle, it is also conceivable (but not probable) that someone will sit down and make a brain emulation machine.
Making a brain emulation machine requires (1) the ability to image a brain at sufficient resolution, and (2) computing power in excess of the largest supercomputers available today. Both tasks require a long engineering lead time and a sustained commitment of resources, and neither is something we expect to be solved by some clever insight. Clever insight alone won’t ever enable you to construct record-setting supercomputers out of leftover hobbyist computer parts, toothpicks, and superglue.
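To give a rough sense of the imaging requirement, here is a back-of-the-envelope data-volume sketch in Python; the brain volume, voxel size, and bytes per voxel are purely illustrative assumptions, not settled figures:

    # Crude data-volume estimate for imaging a whole brain at roughly
    # synaptic resolution. Every number here is an illustrative assumption.
    brain_volume_m3 = 1.4e-3      # assume ~1.4 litres of tissue
    voxel_edge_m = 10e-9          # assume 10 nm isotropic voxels
    bytes_per_voxel = 1           # assume 1 byte of raw data per voxel

    voxels = brain_volume_m3 / voxel_edge_m ** 3
    raw_bytes = voxels * bytes_per_voxel

    print(f"Voxels:   {voxels:.1e}")               # ~1.4e21 voxels
    print(f"Raw data: {raw_bytes / 1e21:.1f} ZB")  # ~1.4 zettabytes, before compression

Nothing in that calculation calls for a new idea; it just says the job is enormous, which is exactly the lead-time-and-resources point.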
Why do we assume that all that is needed for AI is a clever insight, rather than the insight-equivalent of a long engineering lead time and commitment of resources?
Because the scope of the problems involved, e.g. the search space over programs, can be calculated and compared with other similarly structured but solved problems (e.g. narrow AI). And in a very abstract, theoretical sense, today’s desktop computers are probably sufficient for running a fully optimized human-level AGI. This is a sensible and consistent result: it should not be surprising that it takes many orders of magnitude more computational power to emulate a computing substrate running a general intelligence (the brain simulated by a supercomputer) than to run a natively coded AGI. Designing the program which implements the native, non-emulated AGI is basically a “clever insight” problem, or perhaps more accurately a large series of clever insights.
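To put the “orders of magnitude” claim in concrete terms, here is a rough compute comparison; the neuron and synapse counts, update rate, per-update cost, and machine figures are ballpark assumptions for illustration only:

    # Back-of-the-envelope compute comparison. All figures are rough
    # illustrative assumptions; the point is the size of the gap, not the
    # exact numbers.
    neurons = 8.6e10                 # ~86 billion neurons
    synapses_per_neuron = 1e4        # order-of-magnitude assumption
    update_rate_hz = 1e3             # assume ~1 kHz state updates
    flops_per_synapse_update = 10    # assume a handful of ops per synapse per tick

    emulation_flops = (neurons * synapses_per_neuron
                       * update_rate_hz * flops_per_synapse_update)

    desktop_flops = 1e12             # ~1 TFLOPS, a generous desktop figure
    supercomputer_flops = 1e16       # illustrative order of magnitude for a leading machine

    print(f"Emulation estimate: {emulation_flops:.1e} FLOPS")  # ~8.6e18
    print(f"vs. desktop:       ~{emulation_flops / desktop_flops:.0e}x short")
    print(f"vs. supercomputer: ~{emulation_flops / supercomputer_flops:.0e}x short")

Under these assumptions the emulation route overshoots a desktop by roughly seven orders of magnitude, which is consistent with the claim that a natively coded AGI, free of the emulation overhead, could plausibly run on far more modest hardware.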
I agree.
Why does this make it more plausible that a person can sit down and invent a human-level artificial intelligence than that they can sit down and invent the technical means to produce brain emulations?
We have the technical means to produce brain emulations. It requires only fairly straightforward advances in imaging and larger supercomputers. Various smaller-scale brain emulation projects have already proved the concept. It’s just that doing it at a larger scale and finer resolution requires a lot of person-years to get done.
EDIT: In Rumsfeld-speak, whole-brain emulation is a series of known knowns: lots of work that we know needs to be done, and someone just has to do it. AGI, by contrast, involves known unknowns: we don’t know precisely what has to be done, so we can’t quantify exactly how long it will take. We can guess, but it remains possible that clever insight will find a better, faster, cheaper path.
Sorry for the pause, internet problems at my place.
Anyway, it seems you’re right. Technically, it might be more plausible for AI to be coded sooner (higher variance), even though I think it will take longer than emulation on average.