You are using the wrong terminology here. If the consequences of whatever AGI gets developed are positive, if you are not dead as a result, then it is already almost FAI; that is how it's defined: the effect is positive. Deeper questions concern what it means for the effect to be positive, and how one can be wrong in judging a certain effect positive when it isn't, but let's set that aside for the moment.
If the teenager implemented something that has a good effect, it's FAI. The argument is not that whatever ad-hoc tinkering leads to falls outside some strange concept of "Friendly AI", but that ad-hoc tinkering is expected to lead to disaster, whatever you call it.
I am profoundly skeptical of the link between Hard Takeoff and “everybody dies instantly.”
ad-hoc tinkering is expected to lead to disaster
This is the assumption which I question. I also question the other major assumption of Friendly AI advocates: that all of their philosophizing and (thankfully half-hearted and ineffective) campaign to prevent the “premature” development of AGI will lead to a future containing Friendly AI, rather than no AI plus an earthbound human race dead from natural causes.
Ad-hoc tinkering has given us the seed of essentially every other technology. The major disasters usually wait until hordes of people begin large-scale application of the technology, following received rules rather than an ab initio understanding of how it works.
To discuss it, you need to address it explicitly. You might want to start from here, here and here.
I also question the other assumption of Friendly AI advocates: that all of their philosophizing and (thankfully half-hearted and ineffective) campaign to prevent the “premature” development of AGI will lead to a future containing Friendly AI, rather than no AI plus an earthbound human race dead from natural causes.
That's the wrong way to see it: the argument is simply that the lack of a disaster is better than a disaster (note that this point is separate from the first issue you raised; that is, if it's shown that ad-hoc AGI is not disastrous, by all means go ahead and do it). Suicide is worse than impending death from "natural" causes. That's all. Whether a better way out is likely to be found, or even possible, is almost irrelevant to this position. But we ought to try to find one, even if it seems impossible, even if it is quite improbable.
Ad-hoc tinkering has given us the seed of essentially every other technology.
True, but if a failure can be expected to kill civilization, then the trial-and-error methodology must be avoided, even if it's otherwise convenient, almost indispensable, and has proven itself over the centuries.