ADT basically doesn’t have a doomsday argument. Just as with SIA, you can’t successfully formulate one. So the risk of death is the objective risk we see, not adjusted by various DAs.
But Katja Grace showed that SIA has its own DA: that the Great Filter is more likely to be ahead of us. I don’t understand how ADT prevents this.
Here’s the SIA doomsday argument in ADT: suppose you are a total utilitarian towards all beings at your level of civilizational development. Suppose someone offers you a bet on whether the Great Filter is early or late. Suppose you assume that such bets are commonly made, and commonly made by rich selfish people to altruists (that last clause is just to say that there is nothing wrong with winning the bet).
Then if you bet on “early Great Filter”, only a few other people win when you win. But if you bet on “late Great Filter”, a lot of people win when you do.
This produces the SIA doomsday, but it clearly illustrates that it has nothing to do with the probability of doom. I’ll update the top post to reflect this.
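A minimal sketch of this calculation in Python (the numbers, the names, and the framing of all agents at our level making the same linked decision are illustrative, not from the thread):

```python
# Toy ADT version of the Great Filter bet. Two candidate worlds:
# "early" filter (few civilizations reach our level) and "late"
# filter (many do). Under ADT, all agents in our epistemic situation
# make the same bet, and a total utilitarian sums the winnings of
# every agent at our level.

PRIOR = {"early": 0.5, "late": 0.5}         # objective, non-anthropic priors
N_AGENTS = {"early": 1, "late": 1_000_000}  # agents at our level in each world
STAKE = 1                                   # payoff per winning agent

def expected_total_utility(bet: str) -> float:
    """Expected summed payoff of all linked agents betting on `bet`."""
    # The bet pays out only in the world it names; in that world,
    # every agent at our level wins STAKE.
    return PRIOR[bet] * N_AGENTS[bet] * STAKE

for bet in ("early", "late"):
    print(f"bet on {bet!r}: expected total utility = {expected_total_utility(bet)}")
```

With symmetric objective priors, betting “late” dominates purely because more linked copies collect the payoff; the probability of doom itself is never updated.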
I think the size of the bet depends on my estimate of the probabilities of the different outcomes (maybe not in this case), so we can’t completely exclude probability estimation.
But in general I agree with your theory. It is useful for estimating x-risks. We don’t need exact probabilities of the different risks; we need information about how to use our limited resources to prevent them. These are our bets on our ability to prevent them. But to make such bets we need some idea of the order of magnitude of the risks and of their order in time.
What people often don’t notice about SIA is that it implies 100% certainty that there are an infinite number of people.
This is not 100% certain, so SIA is false.
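A sketch of the standard reasoning behind this claim (the weighting formula below is the usual SIA posterior, not something stated in the thread): SIA multiplies each hypothesis by its observer count,

$$P(H_i \mid \text{I exist}) = \frac{N_i \, P(H_i)}{\sum_j N_j \, P(H_j)},$$

where $N_i$ is the number of observers if $H_i$ is true. Any hypothesis with unboundedly many observers and a nonzero prior drives this ratio toward 1 as $N_i \to \infty$, so in the limit every finite-population hypothesis gets posterior 0.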
I think it is true, since we live in a universe that is infinite on many levels, where all possible people exist. But this exhausts SIA’s effect: it becomes non-informative and thus does not cancel SSA in the DA.
We might live in an infinite universe, but this does not have a probability of 100%.
The probability of this is high: there are several Tegmark levels of universe infinity, and they are mutually independent (quantum multiverse, cosmological inflation, independent universes, eternal existence).
Also, under SIA my own existence is an argument for an almost infinite universe.
And since humans are finite, we don’t need an infinite universe for all possible humans to exist, just a very large one.