Reverse Doomsday Argument is hitting preppers hard

“Where is my Doomsday?” asks a prepper on a conspiracy site. “I spent thousands of dollars on ammunition and 10 years waiting, and still nothing. My ammo is rusting!”

There is a general problem with predicting the end of the world: it keeps not happening. There are many reasons for this, but one is purely mathematical: if something hasn’t happened for a long time, that is strong evidence that it will not happen any time soon. If we have had no nuclear war for 70 years, its probability tomorrow is very small, no matter how tense international relations look.

The first to observe this was Laplace, with the “sunrise problem”. He asked: what is the probability that the Sun will not rise tomorrow, given that it has risen every day for the last 5000 years? He derived an equation: the probability of no sunrise is roughly 1/N, where N is the number of days on which the Sun has risen. This is known as the rule of succession, and Laplace had an even more general equation for it, which could account for a situation where the Sun had missed several sunrises.
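Written out, the rule of succession (the standard result, exact under a uniform prior on the unknown success probability) is:

$$P(\text{success on trial } n+1 \mid s \text{ successes in } n \text{ trials}) = \frac{s+1}{n+2}.$$

With $s = n$ (the Sun rose every single day), the chance of no sunrise tomorrow is $\frac{1}{n+2}$, which for large $n$ is approximately the 1/N figure above.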

The fact that something didn’t happen for a long time is evidence that some unknown causal mechanism provides stability to the observed system, even if all visible causal mechanisms point to “the end is nigh”.

“You see, the end of the US is near, as the dollar debt pyramid is unsustainable; it is growing by more than a trillion dollars every year,” a prepper would say. But the dollar has been a fiat currency for decades, and it is very unlikely to fail tomorrow.

The same rule of succession could be used to get a rough prediction of the end times: if there has been no nuclear war for 70 years, there is a 50 per cent chance that one will happen in the next 70 years. This is the Doomsday argument in J. Richard Gott’s version.
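A minimal sketch of this arithmetic in Python, assuming Gott’s Copernican setup (we observe the process at a uniformly random point of its total lifetime; the helper name `gott_interval` is invented here for illustration):

```python
# Gott's "delta t" argument: if t_past / t_total is uniform on (0, 1),
# then with probability c the future duration lies in the interval below.

def gott_interval(t_past: float, confidence: float) -> tuple[float, float]:
    """Bounds on the FUTURE duration of a process that has already
    lasted t_past, at the given confidence level."""
    c = confidence
    return t_past * (1 - c) / (1 + c), t_past * (1 + c) / (1 - c)

# 70 years without nuclear war:
low, high = gott_interval(70, 0.5)
print(f"50% interval: {low:.1f} to {high:.1f} more years")   # 23.3 to 210.0
low, high = gott_interval(70, 0.95)
print(f"95% interval: {low:.1f} to {high:.1f} more years")   # 1.8 to 2730.0
```

The median future duration equals the past duration, since P(future > past) = P(we are in the first half of the total lifetime) = 0.5; hence the “50 per cent chance within the next 70 years” above.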

Surely, something bad will happen within decades. But your ammo will rust first. However, on the civilizational level, we should invest in preventing global risks even if they have a small probability, since in the long run this ensures our survival.

This could be called the “Reverse Doomsday Argument”, as it claims that doomsday is unlikely to be very near. In AI safety, it is a (relatively weak) argument against near-term AI risk, that is, against the claim that dangerous AI is less than 5 years away.
