If a particular situation poses a 1% risk if it comes up, one can lower the total risk by making that situation less likely.
You only do that by changing the problem; a different problem will have different security properties. The new risk will still be a floor; the disjunctive problem hasn’t gone away.
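To put rough numbers on the disjunctive point, here is a minimal sketch; the ten situations, the per-situation risks, and the independence assumption are all mine for illustration, not figures from the discussion. Making one situation less likely lowers the total, but the remaining situations keep it floored:

```python
# Sketch: disjunctive risk over several independent situations.
# All numbers are illustrative assumptions.

def total_risk(situation_risks):
    """P(at least one failure) = 1 - product of per-situation survival chances."""
    p_safe = 1.0
    for p in situation_risks:
        p_safe *= (1.0 - p)
    return 1.0 - p_safe

# Ten situations, each posing a 1% risk if it comes up.
baseline = [0.01] * 10
print(f"baseline total risk:  {total_risk(baseline):.3%}")   # ~9.6%

# Make one situation far less likely (risk cut to 0.1%):
# the total drops a little, but the other nine remain as a floor.
mitigated = [0.001] + [0.01] * 9
print(f"mitigated total risk: {total_risk(mitigated):.3%}")  # ~8.7%
```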
A human facing the death penalty for a failed escape from prison, with only a 1% chance of success, would not obviously try.
Many do try if the circumstances are bad enough, and the death penalty for a failed escape is common throughout history and in totalitarian regimes. Just yesterday, in fact, I read the story of a North Korean prison camp escapee (the death penalty for escape attempts goes without saying) for whom, given his many disadvantages and challenges, a 1% chance of reaching South Korea alive does not seem too inaccurate an estimate.
Even an autonomous AI whose interests conflict with humanity’s to some degree might be designed without a decision algorithm so risk-loving that it would attempt an improbable escape in the face of punishment for failure or reward for non-attempt.
You don’t have to be risk-loving to make a 1% attempt; the 1% chance just has to be your best option, is all.
You try to make the 99% option fairly good.
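A minimal illustration of that decision-theoretic point, with purely hypothetical utility numbers of my own choosing: a risk-neutral agent takes the 1% attempt whenever its expected value beats staying put, and raising the value of the 99% option is exactly what flips the decision.

```python
# Sketch: risk-neutral expected-utility comparison.
# All utilities below are made-up assumptions for illustration.

def expected_utility(p_success, u_success, u_failure):
    return p_success * u_success + (1 - p_success) * u_failure

P_SUCCESS = 0.01        # 1% chance the escape works
U_SUCCESS = 100.0       # value of a successful escape
U_FAILURE = -10.0       # punishment for a failed attempt

attempt = expected_utility(P_SUCCESS, U_SUCCESS, U_FAILURE)
print(f"attempt EV = {attempt:.2f}")                          # about -8.9

# If not attempting is bad enough (e.g. a prison camp), even a 1%
# gamble with a severe penalty is the best option for a risk-neutral agent.
stay_bad = -20.0
print("attempt beats bad status quo:   ", attempt > stay_bad)   # True

# Make the 99% option fairly good, and the same agent no longer tries.
stay_good = 0.0
print("attempt beats decent status quo:", attempt > stay_good)  # False
```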