A sufficiently low probability becomes negligible in light of other risks and risk-reduction opportunities.
Define “sufficiently low”: even with a 99.9% chance of success, you’ve still got a 0.1% chance of killing every human alive, and that’s morally equivalent to a 100% chance of killing 6.5 million people. Saying that if you’re not totally sure your AI is Friendly when you start it up, you’re committing the Holocaust was not hyperbole in any way, shape, or form. It’s simply the result of shutting up and doing the multiplication.
And if you calculate a 0.1% chance of killing every human alive if you start it right now, but also a 0.2% chance of saving the whole of humanity, the net expectation is morally equivalent to a 100% chance of saving the lives of 6.5 million people, in which case you’re guilty of the Holocaust if you do NOT start it.
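To make the multiplication explicit, here is a minimal back-of-the-envelope sketch of the expected-value arithmetic both sides of this exchange are invoking (the 6.5 billion population figure and the 0.1% / 0.2% probabilities come from the comments above; the function itself is just illustrative):

```python
# Back-of-the-envelope expected-value arithmetic from the exchange above.
POPULATION = 6.5e9  # world population figure assumed in the thread

def net_expected_lives(p_kill_everyone, p_save_everyone=0.0):
    """Net expected lives saved; a negative result means net expected deaths."""
    return (p_save_everyone - p_kill_everyone) * POPULATION

# A 0.1% chance of killing everyone, ignoring any upside:
print(net_expected_lives(0.001))         # roughly -6.5 million (expected deaths)

# The same 0.1% downside weighed against a 0.2% chance of saving everyone:
print(net_expected_lives(0.001, 0.002))  # roughly +6.5 million (expected lives saved)
```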
“Shut up and multiply” works both ways.
This doesn’t hold if some extra work could improve those odds.
(IMO, the sense of moral urgency created by things like Holocaust analogies almost always does more harm than good.)
Negligible in terms of calculating the effect of the AI project on existential risk, because the other effects, positive and negative, would be so much larger.
Any other possible effects don’t negate that you’re killing 6.5 million people in expectation when you go ahead with a potentially UnFriendly AI.
If you’re reducing the expected risk of existential disaster by a larger amount, then in expectation you’re net saving lives rather than net killing. If all options involve existential risk, including doing nothing, then all one can do is pick the option with the lowest expected risk.
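As a rough sketch of that decision rule, assuming you could attach an expected existential-risk estimate to each available option (the option names and numbers below are purely hypothetical placeholders, not real estimates):

```python
# Pick the option with the lowest expected probability of existential disaster.
# All figures are hypothetical placeholders for illustration only.
options = {
    "launch the AI now": 0.0010,
    "delay and do more Friendliness verification": 0.0005,
    "do nothing": 0.0020,
}

best = min(options, key=options.get)
print(f"Lowest expected existential risk: {best} ({options[best]:.2%})")
```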