I agree: just because something MIGHT backfire doesn’t mean we automatically shouldn’t try it. We should weigh the potential benefits against the potential costs as best we can predict them, along with our best guesses about the likelihood of each.
In this example, of course, the lessons we learn about “genies” are supposed to be applied to artificial intelligences.
One of the central ideas Eliezer tries to convey about AI is that once we get an AI that’s as smart as humans, we will very quickly get an AI that’s vastly smarter than humans. At that point, the AI can probably trick us into letting it loose, and it may be able to devise a plan to achieve almost anything.
In this scenario, the potential costs are almost unlimited, and the probabilities are hard to estimate. That makes figuring out the best way to program such an AI extremely important.
Because that’s a genie…
{CSI sunglasses moment}
… that we can’t put back in the bottle.