Here’s the SIA doomsday argument in ADT: suppose you are a total utilitarian towards all beings at your level of civilizational development. Suppose someone offers you a bet on whether the Great Filter is early or late. Suppose you assume that such bets are commonly made, and commonly made by rich selfish people to altruists (that last clause is just to say that there is nothing wrong with winning the bet).
Then if you bet on “early Great Filter” and win, only a few other people win alongside you; but if you bet on “late Great Filter” and win, a great many people win with you.
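Here is a toy calculation of that comparison; all the numbers (even priors, population sizes, the stake) are my own illustrative assumptions, not part of the argument:

```python
# Toy ADT expected-utility calculation for the Great Filter bet.
# All numbers below are illustrative assumptions.

# Even prior credence in each hypothesis, so probabilities of doom
# play no special role in the result.
p_early, p_late = 0.5, 0.5

# Assumed number of civilizations at our level of development under
# each hypothesis: a late filter means many civilizations reach our stage.
n_early, n_late = 10, 1_000

stake = 1.0  # payoff to each winning bettor

# In ADT, every agent at our level facing this decision decides alike,
# and a total utilitarian sums the winnings of all of them.
u_bet_early = p_early * n_early * stake
u_bet_late = p_late * n_late * stake

print(f"EU of betting 'early filter': {u_bet_early}")  # 5.0
print(f"EU of betting 'late filter':  {u_bet_late}")   # 500.0
# The total utilitarian bets 'late' even at even odds: the SIA-doomsday
# behaviour, driven by the number of winners, not by probabilities.
```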
This produces the SIA doomsday, but illustrates clearly that it has nothing to do with probabilities of doom. I’ll add this to the top post to reflect this.
I think that the size of the bet depends on my estimate of the probabilities of the different outcomes (maybe not in this case), so we can’t completely exclude probability estimation.
But in general I agree with your theory. It is useful for estimating x-risks: we don’t need exact probabilities for the different risks; we need to know how to use our limited resources to prevent them. These are our bets on our ability to prevent them. But to make such bets we need some idea of the order of magnitude of the risks and their ordering in time.
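A minimal sketch of that last point: a fixed prevention budget can be allocated from rough orders of magnitude and a time ordering alone. The risk names, magnitudes, timings, and the discounting rule are all hypothetical assumptions here, not a claim about the right scheme:

```python
# Allocate a fixed prevention budget using only order-of-magnitude
# probabilities and rough timing. All inputs are hypothetical.

risks = {
    # name: (order-of-magnitude probability, decades until expected onset)
    "risk_A": (1e-1, 2),
    "risk_B": (1e-2, 1),
    "risk_C": (1e-3, 5),
}

budget = 100.0  # units of limited prevention resources

# Weight each risk by magnitude, discounted by how soon it arrives
# (earlier risks must be handled first); one simple rule among many.
weights = {name: p / t for name, (p, t) in risks.items()}
total = sum(weights.values())

for name, w in weights.items():
    print(f"{name}: {budget * w / total:.1f} units")
```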