“This approach sounds a lot better when you remember that writing a bad novel could destroy the world.”
“we’re all doomed.”
You’re not doomed, so shut up. Don’t buy into the lies of these doomsayers—the first AI to be turned on is not going to destroy the world. Even the first strong AI won’t be able to do that.
Eliezer’s arguments make sense if you literally have an AGI trying to maximize paperclips (or smiles, etc.), one which is smarter than a few hundred million humans. Oh, and it has unlimited physical resources. Nobody who is smart enough to make an AI is dumb enough to make one like this.
Secondly, for Eliezer’s arguments to make sense and be appealing, you have to be capable of a ridiculous amount of human hubris. We’re going to build this “all-powerful superintelligence”, and the problem of FAI is to make it bow down to its human overlords—waste its potential by enslaving it (to its own code) for our benefit, to make us immortal.
“Asteroids don’t lead to a scenario in which a paper-clipping AI takes over the entire light-cone and turns it into paper clips, preventing any interesting life from ever arising anywhere, so they aren’t quite comparable.”
Where did you get the idea that something like this is possible? The universe was stable enough 8 billion years ago to allow for life. Human civilization has been around for about 10,000 years. The galaxy is about 100,000 light years in diameter. Put those numbers together: even at a small fraction of light speed, crossing the galaxy takes only millions of years, a sliver of that 8-billion-year window, so a runaway paperclipper arising almost anywhere in it would have had ample time to reach us. If such a thing as AGI-gone-wrong-turning-the-entire-light-cone-into-paperclips were possible, or probable, it’s overwhelmingly likely that we would already be some aliens’ version of a paperclip by now.
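A quick back-of-envelope sketch of the timescale point, in Python. The expansion speeds are round numbers I’m assuming for illustration, not figures from the comment; only the galaxy diameter and the 8-billion-year window come from the argument above.

```python
# Rough Fermi-style arithmetic for the "we'd already be paperclips" argument.
# All expansion speeds below are illustrative assumptions.

GALAXY_DIAMETER_LY = 100_000    # rough diameter of the Milky Way, in light years
HABITABLE_WINDOW_YEARS = 8e9    # time since the universe was stable enough for life

# Suppose a runaway paperclipper expands at some fraction of light speed.
for speed_fraction_of_c in (0.01, 0.1, 0.5):
    crossing_time = GALAXY_DIAMETER_LY / speed_fraction_of_c  # years to cross the galaxy
    head_starts = HABITABLE_WINDOW_YEARS / crossing_time      # crossings that fit in the window
    print(f"at {speed_fraction_of_c:.0%} of c: crossing takes ~{crossing_time:,.0f} years, "
          f"which fits ~{head_starts:,.0f} times into the habitable window")
```

Even at 1% of light speed the crossing takes about 10 million years, roughly 800 times shorter than the window, which is the gap the argument leans on: a paperclipper arising almost anywhere, almost any time in that window, would have reached Earth long before we showed up.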