I’d like to take my chances of being cooked vs. a world without furnaces, thermostats or even rooms—something I believe we’re headed for by default, in the very near future.
I will accept that “AGI-now” proponents should carry the blame for a hypothetical Paperclip apocalypse when Friendliness proponents accept similar blame for an Earth-bound humanity flattened by a rogue asteroid, or leveled by any of the various threats that a superintelligence (or, say, the output of a purely human AI research community unburdened by Friendliness worries) might be able to counter. I previously gave Orlov’s petrocollapse as yet another example.
ASCII—the onus is on you to give compelling arguments that the risks you are taking are worth it.
Status quo bias, anyone?
I presently believe, not without justification, that we are headed for extinction-level disaster as things are; and that not populating the planet with the highest achievable intelligence is in itself an immediate existential risk. In fact, our current existence may well be looked back on as an unthinkably horrifying disaster by a superintelligent race (I’m thinking of Yudkowsky’s Super-Happies).
Since your justification is omitted here, I’ll go ahead and suspect it’s at least as improbable as this one. The question isn’t simply “do we need better technology to mitigate existential risk?”; it’s “is the risk that technological suppression due to Friendliness concerns wipes us out greater than the corresponding AGI risk?”
If you assume Friendliness is not a problem, AI is obviously a beneficial development. Is that really the major concern here? All this talk of the benefits of scientific and technological progress seems wasted. Take Friendliness out of the picture, and I doubt many here would disagree with the general point that progress mitigates long-term risk.
So please, be more specific. The argument “lack of progress contributes to existential risk” contains no new information. Either tell us why this risk is far greater than we suspect, or why AGI is less risky than we suspect.
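To restate the comparison in shorthand (my notation, not anyone’s actual estimates): write P(doom | delay) for the probability that holding AGI back over Friendliness concerns ends in extinction, and P(doom | rush) for the probability that pursuing AGI now, without a Friendliness guarantee, does. The whole disagreement is over whether

    P(doom | delay) > P(doom | rush)

and pointing at the size of either term on its own settles nothing; the argument has to be about which side of that inequality we are on.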
I was going to reply, but it appears that someone has eloquently written the reply for me.
I’d like to take my chances of being cooked vs. a world without furnaces, thermostats or even rooms—something I believe we’re headed for by default, in the very near future.
This reminds me of the response I got when I criticized an acquaintance for excessive, reckless speeding: “Life is all about taking risks.”
The only difference was that he was mostly risking his own life, whereas asciilifeform is risking mine, yours and everyone else’s too.
ASCII—the onus is on you to give compelling arguments that the risks you are taking are worth it.
Actually, I fully intended the implication that he was risking more than his own life. Self-inflicted risks don’t concern me.
Now you’ve got me wondering what the casualty distribution for speeding-induced accidents looks like.
Well, if ASCII has his way, there may be one data point at casualty level 6.6 billion…