This is not how truly fundamental breakthroughs are made.
Hmm—now that you mention it, I realize my domain knowledge here is weak. How are truly fundamental breakthroughs made? I would guess that it depends on the kind of breakthrough—that there are some things that can be solved by a relatively small number of core insights (think Albert Einstein in the patent office) and some things that are big collective endeavors (think Human Genome Project). I would guess furthermore that in many ways AGI is more like the latter than the former, see below.
Why do you assume that AGI lies beyond the capabilities of any single intelligent person armed with a modern computer and a sufficiently unorthodox idea?
Only about two percent of the Linux kernel was personally written by Linus Torvalds. Building a mind seems like it ought to be more difficult than building an operating system. In either case, it takes more than an unorthodox idea.
Only about two percent of the Linux kernel was personally written by Linus Torvalds. Building a mind seems like it ought to be more difficult than building an operating system.
There is no law of Nature that says the consequences must be commensurate with their cause. We live in an unsupervised universe where the movement of a butterfly’s wings can determine the future of nations. You can’t conclude that, simply because the effect is expected to be vast, the cause must be correspondingly prominent. This knowledge may only be found by a more mechanistic route.
You’re right in the sense that I shouldn’t have used the words ought to be, but I think the example is still good. If other software engineering projects take more than one person, then it seems likely that AGI will too. Even if you suppose the AI does a lot of the work up to the foom, you still have to get the AI up to the point where it can recursively self-improve.
Usually by accident, by one or a few people. This is a fine example.
ought to be more difficult than building an operating system
I personally suspect that the creation of the first artificial mind will be more akin to a mathematician’s “aha!” moment than to a vast pyramid-building campaign. This is simply my educated guess, however, and my sole justification for it is that a number of pyramid-style AGI projects of heroic proportions have been attempted and all failed miserably. I disagree with Lenat’s dictum that “intelligence is ten million rules.” I suspect that the legendary missing “key” to AGI is something which could ultimately fit on a t-shirt.
I personally suspect that the creation of the first artificial mind will be more akin to a mathematician’s “aha!” moment than to a vast pyramid-building campaign. [...] my sole justification [...] is that a number of pyramid-style AGI projects of heroic proportions have been attempted and failed miserably.
“Reversed Stupidity is Not Intelligence.” If AGI takes deep insight and a pyramid, then we would expect those projects to fail.
Fair enough. It may very well take both.