Why do we assume that all that is needed for AI is a clever insight, rather than the equivalent of long engineering time and a commitment of resources?
Because the scope of the problems involved (e.g. the search space over programs) can be calculated and compared with other, similarly structured but already solved problems (e.g. narrow AI). And in a very abstract, theoretical sense, today’s desktop computers are probably sufficient for running a fully optimized human-level AGI. This is a sensible and consistent result: it should not be surprising that it takes many orders of magnitude more computational power to emulate a computing substrate running a general intelligence (a brain simulated on a supercomputer) than to run a natively coded AGI. Designing the program which implements the native, non-emulative AGI is basically a “clever insight” problem, or perhaps more accurately a large series of clever insights.
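For a rough sense of the gap being pointed at here, a back-of-envelope sketch in Python; the FLOPS figures are commonly cited order-of-magnitude estimates used as assumptions, not established facts:

```python
# Illustrative assumptions (rough, contested order-of-magnitude figures):
#  - emulating a human brain at spiking-neuron fidelity is often estimated to
#    need on the order of 1e18 FLOPS (e.g. the Sandberg & Bostrom
#    whole-brain-emulation roadmap gives estimates in this region),
#  - a high-end desktop sustains very roughly 1e12-1e13 FLOPS.

EMULATION_FLOPS = 1e18   # assumed cost of emulating the brain's substrate
DESKTOP_FLOPS = 1e13     # assumed sustained throughput of a desktop machine

overhead = EMULATION_FLOPS / DESKTOP_FLOPS
print(f"Emulation needs roughly {overhead:.0e}x a desktop's compute.")
# -> roughly 1e+05x, i.e. about five orders of magnitude.
# A natively coded AGI would not pay this emulation overhead, which is why
# the desktop figure could still be the relevant compute budget.
```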
I agree.
Why does this make it more plausible that a person can sit down and invent a human-level artificial intelligence than that they can sit down and invent the technical means to produce brain emulations?
We essentially have the technical means to produce brain emulations: what remains is fairly straightforward advances in imaging and larger supercomputers. Various smaller-scale brain emulation projects have already proved the concept. Doing the same at larger scale and finer resolution simply requires a lot of person-years to get it done.
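As a hedged sketch of what “larger scale and finer resolution” means in practice, the per-mm³ data-volume figure below is an order-of-magnitude assumption drawn from recent electron-microscopy connectomics datasets, not a number from this thread:

```python
# Illustrative assumptions: recent electron-microscopy reconstructions of
# about 1 mm^3 of cortex came to roughly a petabyte of raw image data, and a
# human brain is on the order of 1.2 million mm^3. Both figures are rough.

BYTES_PER_MM3 = 1.4e15    # ~1.4 PB per mm^3 (order-of-magnitude assumption)
BRAIN_VOLUME_MM3 = 1.2e6  # ~1.2 liters of tissue

total_bytes = BYTES_PER_MM3 * BRAIN_VOLUME_MM3
print(f"Whole-brain raw imaging data: ~{total_bytes:.1e} bytes (zettabyte scale).")
# -> ~1.7e+21 bytes. Nothing conceptually new is needed to gather or process
# this; the difficulty is imaging throughput, storage, and reconstruction,
# i.e. exactly the "lots of person-years" kind of problem described above.
```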
EDIT: In Rumsfeld-speak, whole-brain emulation is a series of known knowns: lots of work that we know needs to be done, which someone just has to do. AGI, by contrast, involves known unknowns: we don’t know precisely what has to be done, so we can’t quantify exactly how long it will take. We can guess, but it remains possible that a clever insight will find a better, faster, cheaper path.
Sorry for the pause; internet problems at my place.
Anyway, it seems you’re right: technically it may be more plausible for AI to get coded sooner (its timeline has higher variance), even though I think it will take more time than emulation on average.