Your argument assumes that the time-horizon of rational utility maximisers never reaches further than their next decision. If I only get one shot to increase my expected utility by 1%, and I’m rational, yes, I’ll take any odds better than 100:1 in favour on an all-or-nothing bet. That is a highly contrived scenario: it is almost always possible to stake less than your entire utility on an outcome, in which case you generally should in order to reduce risk-of-ruin and thus increase long-term expected utility.
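To make the arithmetic explicit, here is a minimal sketch of the break-even point for that bet (the “stake everything to win 1%” framing is my reading of the scenario):

```python
# Break-even odds for the all-or-nothing bet above: stake your entire
# utility U to win an extra 1% of it. With win probability p you end
# with 1.01*U, otherwise with 0, so E[utility] = 1.01 * p * U.
p_breakeven = 1 / 1.01                            # solve 1.01 * p = 1
odds_in_favour = p_breakeven / (1 - p_breakeven)

print(p_breakeven)     # 0.990099...
print(odds_in_favour)  # ~100.0 -> you need odds better than 100:1 in favour
```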
Further, the risks of not using nuclear weapons in the Second World War were nothing like those you gave. Japan was in no danger of occupying the United States at the time the decision to initiate the Trinity test was made; as for Germany, it had already surrendered! The anticipated effect of not using the Bomb was rather the loss of perhaps several hundred thousand soldiers in an invasion of Japan, along with the large economic cost of prolonging the war. As for atmospheric ignition, the calculated risk at the time of testing was even lower than the 300,000:1 you stated (although Fermi offered evens on the morning of the Trinity test).
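For reference, the two risk figures traded in this thread are in the same ballpark; a quick conversion (reading the odds as a probability is my own gloss):

```python
# 300,000:1 against, read as a probability, is roughly 3.3 in a million,
# i.e. the same ballpark as the "3 in 1 million" figure used below.
odds_against = 300_000
p_ignition = 1 / (odds_against + 1)
print(p_ignition)      # ~3.33e-06
```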
That is a highly contrived scenario: it is almost always possible to stake less than your entire utility on an outcome, in which case you generally should in order to reduce risk-of-ruin and thus increase long-term expected utility.
It is derived from real-life experiences, which I listed. Yes, it is almost always possible to stake less than your entire utility. Almost. Hence the “once per century”, instead of “billions of times per day”.
Further, the risks of not using nuclear weapons in the Second World War were nothing like those you gave. Japan was in no danger of occupying the United States at the time the decision to initiate the Trinity test was made; as for Germany, it had already surrendered! The anticipated effect of not using the Bomb was rather the loss of perhaps several hundred thousand soldiers in an invasion of Japan, along with the large economic cost of prolonging the war.
I think that’s right. Do you realize you are only making my case stronger, by showing that the decision was made by somewhat-rational people for even smaller benefits?
If you accept that it would be rational to perform the Trinity test in a hypothetical world in which the Germans and Japanese were winning the war, and in which it had a 3 in 1 million chance of destroying life on Earth, then I have made my point. Arguing about what the risks and benefits truly were historically is irrelevant. It doesn’t really matter what actual humans did in an actual situation, because we already agree that humans are irrational. What matters is whether we can find conditions under which perfectly rational beings, faced with the situation as I posed it, would choose not to conduct the Trinity test.
I meant that low-existential-risk wagers are almost always available, regardless of the presence or absence of high-existential-risk wagers, and I claimed that those low-risk wagers are preferable even when the cost of not taking the high-risk wager is very high, provided, that is, that you have a long time horizon. The only time you should take a high-existential-risk wager is when your long-term future utility will be permanently and substantially decreased by your not doing so. That doesn’t apply to your example of the first nuclear test, as the alternative would not have led to the nightmare invasion scenario, but rather to a bloody but recoverable mess. So you haven’t proven the rationality of testing nukes (in that scenario, at least, as you point out).
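A minimal sketch of that trade-off with made-up numbers (the per-period utilities, the recovery time, and the horizon are my assumptions, not figures from the thread):

```python
# Refusing the risky wager costs heavily for a while (a "bloody but
# recoverable mess"), then utility returns to normal. Taking the wager
# risks everything with a small probability q.
q = 3e-6                  # chance the wager destroys everything
horizon = 100_000_000     # periods in the agent's time horizon
u_normal = 1.0            # utility per period in the good outcome
recovery = 30             # periods of ~zero utility if you refuse

U_refuse = (horizon - recovery) * u_normal   # temporary, recoverable loss
U_take = (1 - q) * horizon * u_normal        # all-or-nothing in expectation

# Refusing wins once horizon > recovery / q (about 1e7 periods here),
# which is exactly the "long time horizon" proviso above.
print(U_refuse > U_take)  # True
```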
If you accept that it would be rational to perform the Trinity test in a hypothetical world in which the Germans and Japanese were winning the war, and in which it had a 3 in 1 million chance of destroying life on Earth, then I have made my point.
It actually still depends on the time horizon and the utility of a surrender or truce. I suppose a 30-year cut-off could be short enough to make it rational. Of course, if being overrun by Nazis is assumed to lead to eternal hellish conditions...
What matters is whether we can find conditions under which perfectly rational beings, faced with the situation as I posed it, would choose not to conduct the Trinity test.
I really don’t think we can. If the utility of not-testing is zero at all times after T, the utility of destroying the Earth is zero at all times after T, and the utility of winning the war is greater than or equal to zero at all times after T, then, whatever your time horizon, you will test if you are an expected-utility maximizer. Where I disagree is that there exists a base rate of anything like once per century for taking a 3 in a million chance of destroying the world in scenarios where those sorts of figures don’t hold. I also disagree that rational agents will confront each other with such choices anything like as often as once per century.
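Spelled out, the first claim looks like this under those premises (a sketch; the value 1.0 for winning is an arbitrary stand-in for any non-negative utility):

```python
# Under the stated premises, testing weakly dominates for every horizon.
q = 3e-6            # chance the test destroys the Earth
U_win = 1.0         # premise: winning is worth >= 0 after T (value arbitrary)
U_not_test = 0.0    # premise: not testing is worth 0 after T
U_destroyed = 0.0   # premise: a destroyed Earth is worth 0 after T

E_test = (1 - q) * U_win + q * U_destroyed
E_not_test = U_not_test
print(E_test >= E_not_test)   # True for any U_win >= 0, hence: test
```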
As for atmospheric ignition, the calculated risk at the time of testing was even lower than the 300,000:1 you stated
So what were they?
They were zero! Atmospheric ignition was proved to be physically impossible, and afterwards no-one seriously questioned the validity of the calculations. The best possible estimate at the time must, of course, have been higher.
Where I disagree is that there exists a base rate of anything like once per century for taking a 3 in a million chance of destroying the world in scenarios where those sorts of figures don’t hold. I also disagree that rational agents will confront each other with such choices anything like as often as once per century.
If you were to wind the clock back to 1940, and restart World War 2, is there less than a 10% chance of arriving in the nightmare scenario? If not, doesn’t this imply the base rate is at least once per thousand years?
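The arithmetic behind that implication, as I read it (the one-war-per-century frequency is an assumption the question leaves implicit):

```python
# One WW2-scale war per century, each with at least a 10% chance of
# producing the nightmare scenario that forces the high-risk wager.
wars_per_century = 1
p_nightmare = 0.10
occurrences_per_century = wars_per_century * p_nightmare
print(1 / occurrences_per_century)   # 10 centuries -> once per thousand years
```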
I think that you’re historically correct, but it’s enough to posit a hypothetical World War 2 in which the alternative was the nightmare invasion scenario, and show that it would then be rational to conduct the Trinity test.
It actually still depends on the time horizon. I suppose a 30-year cut-off could be short enough to make it rational. Of course, if being overrun by Nazis is assumed to lead to eternal hellish conditions...
The time-horizon is very important. One of my points is that I don’t see how a rational agent could have a time-horizon on the scale of the life of the universe.
I’ve edited my comment in response to the hypothetical.
I bet I know which side of the bet Fermi was willing to take.
It would be rational to offer any odds for that bet.
Seconded regarding the stakes in WW2. The scientists weren’t on the front lines either, so it’s highly doubtful they would have been killed.