How is blindly looking for AGI in a vast search space better than stagnation?
No amount of aimless blundering beats deliberate caution and moderation (see the 15th-century China example) for maintaining technological stagnation.
How does working on FAI qualify as “stagnation”?
It is a distraction from doing things which are actually useful in the creation of our successors.
You are trying to invent the circuit breaker before discovering electricity; the airbag before the horseless carriage. I firmly believe that all of the effort currently put into “Friendly AI” is wasted. The bored teenager who finally puts together an AGI in his parents’ basement will not have read any of these deep philosophical tracts.
The bored teenager who finally puts together an AGI in his parents’ basement will not have read any of these deep philosophical tracts.
AGI is a really hard problem. If it ever gets accomplished, it’s going to be by a team of geniuses who have been working on the project for years. Will they be so immersed in the math that they won’t have read the deep philosophical tracts? Maybe. But your bored teenager scenario makes no sense.
It has successfully resisted solution thus far, but I suspect that it will seem laughably easy in retrospect when it finally falls.
If it ever gets accomplished, it’s going to be by a team of geniuses who have been working on the project for years
This is not how truly fundamental breakthroughs are made.
Will they be so immersed in the math that they won’t have read the deep philosophical tracts?
Here is where I agree with you—anyone both qualified and motivated to work on AGI will have no time or inclination to pontificate regarding some nebulous Friendliness.
But your bored teenager scenario makes no sense.
Why do you assume that AGI lies beyond the capabilities of any single intelligent person armed with a modern computer and a sufficiently unorthodox idea?
This is not how truly fundamental breakthroughs are made.
Hmm—now that you mention it, I realize my domain knowledge here is weak. How are truly fundamental breakthroughs made? I would guess that it depends on the kind of breakthrough—that there are some things that can be solved by a relatively small number of core insights (think Albert Einstein in the patent office) and some things that are big collective endeavors (think Human Genome Project). I would guess furthermore that in many ways AGI is more like the latter than the former; see below.
Why do you assume that AGI lies beyond the capabilities of any single intelligent person armed with a modern computer and a sufficiently unorthodox idea?
Only about two percent of the Linux kernel was personally written by Linus Torvalds. Building a mind seems like it ought to be more difficult than building an operating system. In either case, it takes more than an unorthodox idea.
Only about two percent of the Linux kernel was personally written by Linus Torvalds. Building a mind seems like it ought to be more difficult than building an operating system.
There is no law of Nature that says the consequences must be commensurate with their cause. We live in an unsupervised universe where the movement of a butterfly’s wings can determine the future of nations. You can’t conclude that, simply because the effect is expected to be vast, its cause must be correspondingly prominent. This knowledge may only be found by a more mechanistic route.
You’re right in the sense that I shouldn’t have used the words “ought to be,” but I think the example is still good. If other software engineering projects take more than one person, then it seems likely that AGI will too. Even if you suppose the AI does a lot of the work up to the foom, you still have to get the AI up to the point where it can recursively self-improve.
Usually by accident, by one or a few people. This is a fine example.
ought to be more difficult than building an operating system
I personally suspect that the creation of the first artificial mind will be more akin to a mathematician’s “aha!” moment than to a vast pyramid-building campaign. This is simply my educated guess, however, and my sole justification for it is that a number of pyramid-style AGI projects of heroic proportions have been attempted and all failed miserably. I disagree with Lenat’s dictum that “intelligence is ten million rules.” I suspect that the legendary missing “key” to AGI is something which could ultimately fit on a t-shirt.
I personally suspect that the creation of the first artificial mind will be more akin to a mathematician’s “aha!” moment than to a vast pyramid-building campaign. [...] my sole justification [...] is that a number of pyramid-style AGI projects of heroic proportions have been attempted and failed miserably.
“Reversed Stupidity Is Not Intelligence.” If AGI takes both deep insight and a pyramid, then we would still expect those pyramid-style projects to fail.
Fair enough. It may very well take both.
The bored teenager who finally puts together an AGI in his parents’ basement will not have read any of these deep philosophical tracts.
That truly would be a sad day.
Are you seriously suggesting hypothetical AGIs built by bored teenagers in basements are “things which are actually useful in the creation of our successors”?
Is that your plan against intelligence stagnation?
I’ll bet on the bored teenager over a sclerotic NASA-like bureaucracy any day. Especially if a computer is all that’s required to play.
This is an answer to a different question. A plan is something implemented to achieve a goal, not something that is just more likely to work (especially against you).
I view the teenager’s success as simultaneously more probable and more desirable than that of a centralized bureaucracy. I should have made that more clear. And my “goal” in this case is simply the creation of superintelligence. I believe the entire notion of pre-AGI-discovery Friendliness research to be absurd, as I already explained in other comments.
You are using the wrong terminology here. If the consequences of whatever AGI gets developed are seen as positive, if you are not dead as a result, it is already almost FAI; that is how it’s defined: the effect is positive. Deeper questions turn on what it means for the effect to be positive, and how one can be wrong in considering a certain effect positive even though it’s not, but let’s leave that aside for the moment.
If the teenager implements something that has a good effect, it’s FAI. The argument is not that whatever ad-hoc tinkering leads to falls outside some strange concept of “Friendly AI”, but that ad-hoc tinkering is expected to lead to disaster, whatever you call it.
I am profoundly skeptical of the link between Hard Takeoff and “everybody dies instantly.”
ad-hoc tinkering is expected to lead to disaster
This is the assumption which I question. I also question the other major assumption of Friendly AI advocates: that all of their philosophizing, and their (thankfully half-hearted and ineffective) campaign to prevent the “premature” development of AGI, will lead to a future containing Friendly AI, rather than no AI plus an earthbound human race dead from natural causes.
Ad-hoc tinkering has given us the seed of essentially every other technology. The major disasters usually wait until large-scale application of the technology by hordes of people following received rules (rather than an ab initio understanding of how it works) begins.
To discuss it, you need to address it explicitly. You might want to start from here, here and here.
I also question the other major assumption of Friendly AI advocates: that all of their philosophizing, and their (thankfully half-hearted and ineffective) campaign to prevent the “premature” development of AGI, will lead to a future containing Friendly AI, rather than no AI plus an earthbound human race dead from natural causes.
That’s the wrong way to see it: the argument is simply that lack of disaster is better than a disaster (note that the scope of this category is separate from the first issue you raised; that is, if it’s shown that ad-hoc AGI is not disastrous, by all means go ahead and do it). Suicide is worse than impending death from “natural” causes. That’s all. Whether it’s likely that a better way out will be found, or even possible, is almost irrelevant to this position. But we ought to try to find it, even if it seems impossible, even if it is quite improbable.
Ad-hoc tinkering has given us the seed of essentially every other technology.
True, but if a failure can be expected to kill civilization, the trial-and-error methodology must be avoided, even if it’s otherwise convenient and almost indispensable, and has proven itself over the centuries.
You consider the creation of an unFriendly superintelligence a step on the road to understanding Friendliness?
Earlier:
I firmly believe that all of the effort currently put into “Friendly AI” is wasted. The bored teenager who finally puts together an AGI in his parents’ basement will not have read any of these deep philosophical tracts.
In other words, Friendly AI is an ineffective effort even compared to something entirely hypothetical.