Aside from that: If I had been following your writings more carefully, I might already have learned the answer to this, but just why do you prioritize formalizing Friendly AI over achieving AI in the first place?
It seems to me that if any intelligence, regardless of its origin, is capable of wrenching the universe out of our control, it deserves it.
I don’t think you understand the paperclip maximizer scenario. An UnFriendly AI is not necessarily conscious; it’s just this device that tiles the light cone with paperclips. Arguably it helps to say “really powerful optimization process” rather than “intelligence.” Consider that we would not say that a thermostat deserves to control the temperature of a room, even if we happen to be locked in the room and are to be roasted to death.
I’d like to take my chances of being cooked vs. a world without furnaces, thermostats or even rooms—something I believe we’re headed for by default, in the very near future.
I will accept that “AGI-now” proponents should carry the blame for a hypothetical Paperclip apocalypse when Friendliness proponents accept similar blame for an Earth-bound humanity flattened by a rogue asteroid (or leveled by any of the various threats a superintelligence—or, say, the output of a purely human AI research community unburdened by Friendliness worries—might be able to counter. I previously gave Orlov’s petrocollapse as yet another example.)
ASCII—the onus is on you to give compelling arguments that the risks you are taking are worth it.
Status quo bias, anyone?
I presently believe, not without justification, that we are headed for extinction-level disaster as things are; and that not populating the planet with the highest achievable intelligence is in itself an immediate existential risk. In fact, our current existence may well be looked back on as an unthinkably horrifying disaster by a superintelligent race (I’m thinking of Yudkowsky’s Super-Happies.)
Since your justification is omitted here, I’ll go ahead and suspect it’s at least as improbable as this one. The question isn’t simply “do we need better technology to mitigate existential risk?”; it’s “are the odds that technological suppression due to friendliness concerns wipes us out greater than the corresponding AGI risk?”
If you assume friendliness is not a problem, AI is obviously a beneficial development. Is that really the major concern here? All this talk of the benefits of scientific and technological progress seems wasted. Take friendliness out of the picture, and I doubt many here would disagree with the general point that progress mitigates long-term risk.
So please, be more specific. The argument “lack of progress contributes to existential risk” contains no new information. Either tell us why this risk is far greater than we suspect, or why AGI is less risky than we suspect.
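To put the comparison in symbols (a rough sketch only; these are placeholder quantities, not estimates anyone in this thread has actually offered), the disagreement is over which of two conditional probabilities is larger:

\[ P(\text{extinction} \mid \text{delay AGI until Friendliness is solved}) \quad \text{vs.} \quad P(\text{extinction} \mid \text{pursue AGI now, Friendliness unsolved}) \]

The “AGI-now” position needs the left-hand term to be the larger one; the Friendliness-first position needs the right-hand term to be larger. Framed this way, the dispute is separate from the uncontroversial claim that technological progress reduces long-term risk.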
This was addressed in “Value is Fragile.”
I don’t think you understand the paperclip maximizer scenario. An UnFriendly AI is not necessarily conscious; it’s just this device that tiles the light cone with paperclips. Arguably it helps to say “really powerful optimization process” rather than “intelligence.” Consider that we would not say that a thermostat deserves to control the temperature of a room, even if we happen to be locked in the room and are to be roasted to death.
I was going to reply, but it appears that someone has eloquently written the reply for me.
I’d like to take my chances of being cooked vs. a world without furnaces, thermostats or even rooms—something I believe we’re headed for by default, in the very near future.
This reminds me of the response I got when I criticized an acquaintance for excessive, reckless speeding: “Life is all about taking risks.”
The only difference was that he was mostly risking his own life, whereas asciilifeform is risking mine, yours and everyone else’s too.
ASCII—the onus is on you to give compelling arguments that the risks you are taking are worth it.
Actually, I fully intended the implication that he was risking more than his own life. Self-inflicted risks don’t concern me.
Now you’ve got me wondering what the casualty distribution for speeding-induced accidents looks like.
Well if ASCII has his way, there may be one data point at casualty level 6.6 billion …