...aaaand that’s why I don’t go around discussing the danger paths until someone (who I can realistically influence) actually starts to advocate going down them. Plenty of idiots to take it as an instruction manual. So I discuss the safe path but make no particular advance effort to label the dangerous ones.
You just made me want to participate even more!
Eliezer,
I am rather surprised that you accept all of the claimed achievements of Eurisko and even regard it as “dangerous”, despite the fact that no one save the author has ever seen even a fragment of its source code. I firmly believe that we are dealing with a “mechanical Turk.”
I am also curious why you believe that meaningful research on Friendly AI is at all possible without prior exposure to a working AGI. To me it seems a bit like trying to invent the ground fault interrupter before having discovered electricity.
Aside from that: If I had been following your writings more carefully, I might already have learned the answer to this, but just why do you prioritize formalizing Friendly AI over achieving AI in the first place? You seem to side with humanity over a hypothetical Paperclip Optimizer. Why is that? It seems to me that unaugmented human intelligence is itself an “unfriendly (non-A)I”, quite efficient at laying waste to whatever it touches.
There is every reason to believe that if an AGI does not appear before the demise of cheap petroleum, our species is doomed to “go out with a whimper.” I for one prefer the “bang” as a matter of principle.
I would gladly accept taking a chance at conversion to paperclips (or some similarly perverse fate at the hands of an unfriendly AGI) when the alternative appears to be the artificial squelching of the human urge to discover and invent, with the inevitable harvest of stagnation and, eventually, oblivion.
I accept Paperclip Optimization (and other AGI failure modes) as an honorable death, far superior to being eaten away by old age or being killed by fellow humans in a war over dwindling resources. I want to live in interesting times. Bring on the AGI. It seems to me that if any intelligence, regardless of its origin, is capable of wrenching the universe out of our control, it deserves it.
Why is the continued hegemony of Neolithic flesh-bags so precious to you?
This was addressed in “Value is Fragile.”
I don’t think you understand the paperclip maximizer scenario. An UnFriendly AI is not necessarily conscious; it’s just this device that tiles the light cone with paperclips. Arguably it helps to say “really powerful optimization process” rather than “intelligence.” Consider that we would not say that a thermostat deserves to control the temperature of a room, even if we happen to be locked in the room and are to be roasted to death.
I was going to reply, but it appears that someone has eloquently written the reply for me.
I’d like to take my chances of being cooked vs. a world without furnaces, thermostats or even rooms—something I believe we’re headed for by default, in the very near future.
This reminds me of the response I got when I criticized an acquaintance for excessive, reckless speeding: “Life is all about taking risks.”
The only difference was that he was mostly risking his own life, whereas asciilifeform is risking mine, yours and everyone else’s too.
ASCII—the onus is on you to give compelling arguments that the risks you are taking are worth it.
Actually, I fully intended the implication that he was risking more than his own life. Self-inflicted risks don’t concern me.
Now you’ve got me wondering what the casualty distribution for speeding-induced accidents looks like.
Well if ASCII has his way, there may be one data point at casualty level 6.6 billion …
I will accept that “AGI-now” proponents should carry the blame for a hypothetical Paperclip apocalypse when Friendliness proponents accept similar blame for an Earth-bound humanity flattened by a rogue asteroid, or leveled by any of the various threats that a superintelligence (or, say, the output of a purely human AI research community unburdened by Friendliness worries) might be able to counter. I previously gave Orlov’s petrocollapse as yet another example.
Status quo bias, anyone?
I presently believe, not without justification, that we are headed for extinction-level disaster as things are; and that not populating the planet with the highest achievable intelligence is in itself an immediate existential risk. In fact, our current existence may well be looked back on as an unthinkably horrifying disaster by a superintelligent race (I’m thinking of Yudkowsky’s Super-Happies.)
Since your justification is omitted here, I’ll go ahead and suspect it’s at least as improbable as this one. The question isn’t simply “do we need better technology to mitigate existential risk”, it’s “are the odds that technological suppression due to friendliness concerns wipes us out greater than the corresponding AGI risk”.
If you assume friendliness is not a problem, AI is obviously a beneficial development. Is that really the major concern here? All this talk of the benefits of scientific and technological progress seems wasted. Take friendliness out of the picture, and I doubt many here would disagree with the general point that progress mitigates long-term risk.
So please, be more specific. The argument “lack of progress contributes to existential risk” contains no new information. Either tell us why this risk is far greater than we suspect, or why AGI is less risky than we suspect.