Good article. Thx for posting. I agree with much of it, but …
Goertzel writes:
I do see a real risk that, if we proceed in the manner I’m advocating, some nasty people will take the early-stage AGIs and either use them for bad ends, or proceed to hastily create a superhuman AGI that then does bad things of its own volition. These are real risks that must be thought about hard, and protected against as necessary. But they are different from the Scary Idea.
Is this really different from the Scary Idea?
I’ve always thought of this as part of the Scary Idea—in fact, the reason the Scary Idea is scary—scarier than nuclear weapons. Because when mankind reaches the abyss, and looks with dismay at the prospect that lies ahead, we all know that there will be at least one idiot among us who doesn’t draw back from the abyss, but instead continues forward down the slippery slope.
At the nuclear abyss, that idiot will probably kill a few hundred million of us. No big deal. But at the uFAI abyss, we may have ourselves a serious problem.
It seems different to me.
If I believe “X is incredibly useful but someone might use it to destroy the world,” I can conclude that I should build X and take care to police the sorts of people who get to use it. But if I believe “X is incredibly useful but its very existence might spontaneously destroy the world,” then that strategy won’t work… it doesn’t matter who uses it. Maybe there’s another way, or maybe I just shouldn’t build X, but regardless of the solution it’s a different problem.
It’s like the difference between believing that nuclear weapons might some day be directed by humans to overthrow civilization, and believing that a nuclear reaction will cause all of the Earth’s atmosphere to spontaneously ignite. In the first case, we can attempt to control nuclear weapons. In the second case, we must prevent nuclear reactions from ever starting.
Just to be clear: I’m not championing a position here on what sort of threat AGIs pose. I’m just saying that these are genuinely different threat models.
The “uFAI abyss”? Does that have something to do with the possibility of a small group of “idiots”—who were nonetheless smart enough to beat everyone else to machine intelligence—overthrowing the world’s governments?