I meant that low existential risk wagers are almost always available, regardless of the presence or absence of high existential risk wagers, and I claimed that those low-risk wagers are preferable even when the cost of not taking the high-risk wager is very high, provided, that is, that you have a long time horizon. The only time you should take a high existential risk wager is when your long-term future utility would be permanently and substantially decreased by not doing so. That doesn’t apply to your example of the first nuclear test, because the alternative would not have led to the nightmare invasion scenario, but rather to a bloody yet recoverable mess. So you haven’t proven the rationality of testing nukes (though only in that historical scenario, as you point out).
If you accept that it would be rational to perform the Trinity test in a hypothetical world in which the Germans and Japanese were winning the war, and in which it had a 3 in 1 million chance of destroying life on Earth, then I have made my point.
It actually still depends on the time horizon and the utility of a surrender or truce. I suppose a 30 year cut-off could be short enough to make it rational. Of course if being overrun by Nazis is assumed to lead to eternal hellish conditions...
What matters is whether we can find conditions under which perfectly rational beings, faced with the situation as I posed it, would choose not to conduct the Trinity test.
I really don’t think we can. If the utility of not-testing is zero at all times after T, the utility of destroying the Earth is zero at all times after T, and the utility of winning the war is greater than or equal to zero at all times after T, then, whatever your time horizon, you will test if you are an expected-utility maximizer. Where I disagree is with the claim that there is a base rate of anything like once per century for taking a 3-in-a-million chance of destroying the world in scenarios where those sorts of figures don’t hold. I also disagree that rational agents will confront each other with such choices anything like as often as once per century.
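The dominance argument above can be made concrete with a toy calculation. All figures here are illustrative assumptions matching the stylized setup in the comment, not claims about the actual 1945 stakes:

```python
# Toy expected-utility comparison for the Trinity decision, under the
# comment's stylized assumptions (all numbers hypothetical):
#   - testing risks destroying the Earth with probability p = 3e-6,
#     after which utility is zero at all times after T
#   - not testing yields utility zero at all times after T
#   - winning the war yields utility >= 0 per period at all times after T
p_destroy = 3e-6
u_win_per_period = 1.0  # any non-negative value; only the sign matters


def expected_utility_of_testing(horizon_periods):
    # With probability p_destroy everything after T is worth zero;
    # otherwise each period up to the horizon is worth u_win_per_period.
    return (1 - p_destroy) * u_win_per_period * horizon_periods


def expected_utility_of_not_testing(horizon_periods):
    # Utility of not-testing is zero at all times after T.
    return 0.0


# Testing weakly dominates not-testing for every horizon, however long:
for horizon in (30, 1_000, 10**9):
    assert expected_utility_of_testing(horizon) >= expected_utility_of_not_testing(horizon)
```

The point of the sketch is that the horizon length never flips the sign of the comparison: the test dominates for a 30-year cut-off and for a cosmological one alike, so long as the three utility assumptions hold.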
As for the calculated risk of atmospheric ignition: the calculated risk at the time of testing was even lower than the 300,000:1 you stated.
So what were they?
They were zero! Ignition was proved to be physically impossible, and after that no one seriously questioned the validity of the calculations. As for the best-possible estimate beforehand, it must, of course, have been higher.
If you were to wind the clock back to 1940 and restart World War 2, would there be less than a 10% chance of arriving in the nightmare scenario? If not, doesn’t that imply the base rate is at least once per thousand years?
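The base-rate arithmetic here is a one-liner: if each roughly century-long rerun of history carries at least a 10% chance of producing the nightmare-scenario wager, that is at least one such wager per thousand years in expectation. A sketch, assuming a 100-year rerun (both inputs are the comment's hypotheticals, not measured values):

```python
# Back-of-envelope base rate for facing a world-destroying wager,
# using the comment's hypothetical inputs.
p_wager_per_rerun = 0.10  # >= 10% chance per rerun of history
years_per_rerun = 100     # assumed length of one rerun

wagers_per_year = p_wager_per_rerun / years_per_rerun
years_per_wager = 1 / wagers_per_year
print(years_per_wager)  # 1000.0 -> at least once per thousand years
```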
I think that you’re historically correct, but it’s enough to posit a hypothetical World War 2 in which the alternative was the nightmare invasion scenario, and show that it would then be rational to conduct the Trinity test.
The time-horizon is very important. One of my points is that I don’t see how a rational agent could have a time-horizon on the scale of the life of the universe.
I’ve edited my comment in response to the hypothetical.