I still think that this explanation fails the criterion “explain as if I am 5”. I copy below my comment, in which I try to construct a clearer example of ADT reasoning for a civilization which is at risk of extinction, and which you said is, in fact, a presumptuous philosopher variant (I hope to create an example which is applicable to our world's situation):
Imagine that there are 1000 civilizations in the Universe and that 999 of them will go extinct in their early stage. The one civilization which will not go extinct can survive only if it spends billions of billions on a large prevention project. Each civilization independently develops the DA in its early stage and concludes that the probability of Doom is almost 1. Each civilization has two options in its early stage:
1) Start partying, trying to get as much utility as possible before the inevitable catastrophe.
2) Ignore the anthropic update and go all in on a desperate attempt at catastrophe prevention.
If we choose option 1, then all other agents with a decision process similar to ours will come to the same conclusion, and even the one civilization which was able to survive will not attempt to survive; as a result, all intelligent life in the universe will die off.
If we choose 2, we will most likely fail anyway, but one of the civilizations will survive.
The choice depends on our utilitarian perspective: if we are interested only in our own civilization's well-being, option 1 gives us higher utility, but if we care about the survival of other civilizations, we should choose 2, even if we believe that the probability is against us.
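A toy expected-utility sketch may make the comparison concrete. The payoff numbers below (party utility, prevention cost, survival value, and the value an altruist places on life surviving somewhere) are purely illustrative assumptions, not part of the example itself:

```python
# Toy expected-utility sketch of the 1000-civilizations example.
# All payoff numbers below are illustrative assumptions.

N = 1000                  # civilizations sharing the same decision procedure
p_savable = 1 / N         # chance that we are the one civilization that can be saved
U_PARTY = 1.0             # utility of partying until the catastrophe
U_AUSTERE = 0.1           # utility while pouring resources into the prevention project
U_SURVIVAL = 100.0        # extra utility for our civilization if it actually survives
V_LIFE_GOES_ON = 1000.0   # value an altruist places on intelligent life surviving somewhere

# Because every civilization runs the same decision procedure, our choice is
# effectively the choice of all of them (the correlated-decision point of ADT).

def selfish_eu(option):
    """Expected utility counting only our own civilization."""
    if option == "party":
        return U_PARTY
    # "prevent": we almost certainly fail, but we might be the one savable civilization
    return p_savable * (U_AUSTERE + U_SURVIVAL) + (1 - p_savable) * U_AUSTERE

def altruistic_eu(option):
    """Selfish utility plus the value of intelligent life surviving anywhere."""
    if option == "party":
        return U_PARTY            # nobody tries, so nobody survives
    # the one savable civilization also chooses "prevent", so life survives somewhere
    return selfish_eu("prevent") + V_LIFE_GOES_ON

for option in ("party", "prevent"):
    print(f"{option:8s} selfish EU = {selfish_eu(option):7.3f}"
          f"   altruistic EU = {altruistic_eu(option):8.3f}")
```

With numbers like these, a purely self-interested civilization prefers option 1 (selfish EU 1.0 vs 0.2), while anyone who also values the survival of intelligent life elsewhere prefers option 2 (altruistic EU 1000.2 vs 1.0), matching the conclusion above.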
in which I try to construct a clearer example of ADT reasoning for a civilization which is at risk of extinction, and which you said is, in fact, a presumptuous philosopher variant (I hope to create an example which is applicable to our world's situation)
I do not think there is a sensible ADT DA that can be constructed for reasonable civilizations. In ADT, only agents with weird utilities, such as average utilitarians, have a DA.
SSA has a DA. ADT has an SSA-ish agent, the average utilitarian. Therefore, ADT must have a DA. I constructed it. And it turns out the ADT DA obtained this way has no real doom aspect to it; it has behaviour that looks like avoiding doom, but only for agents with strange preferences. ADT does not have a DA with teeth.
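For readers who want to see the mechanics, here is a minimal incubator-style sketch of the kind of betting argument that gives an ADT average utilitarian an SSA-like doom shift. The two worlds, their populations, and the stakes are my own illustrative assumptions, not necessarily the exact construction referred to above:

```python
# Minimal incubator-style sketch: two equally likely worlds, "doom soon"
# with 1 observer and "doom late" with 2 observers.  Our early observer
# (birth rank 1) exists in both worlds and can take a bet that pays off
# only in the doom-soon world.  All numbers are illustrative assumptions.

P_SOON, P_LATE = 0.5, 0.5    # prior probabilities of the two worlds
N_SOON, N_LATE = 1, 2        # number of observers in each world
COST = 1.0                   # price of the bet, paid by the early observer
REWARD = 1.8                 # paid to the early observer if the world is "doom soon"

def total_utilitarian_eu():
    """ADT with total utilitarian values: sum utility changes over each world."""
    eu_soon = P_SOON * (REWARD - COST)   # only the early observer is affected
    eu_late = P_LATE * (-COST)
    return eu_soon + eu_late

def average_utilitarian_eu():
    """ADT with average utilitarian values: divide each change by the world's population."""
    eu_soon = P_SOON * (REWARD - COST) / N_SOON
    eu_late = P_LATE * (-COST) / N_LATE
    return eu_soon + eu_late

print("total utilitarian EU of taking the bet:  ", round(total_utilitarian_eu(), 3))
print("average utilitarian EU of taking the bet:", round(average_utilitarian_eu(), 3))
# With REWARD = 1.8 the total utilitarian declines the bet (EU = -0.1): it acts
# as if P(doom soon) = 1/2, i.e. no doomsday shift.  The average utilitarian
# accepts it (EU = +0.15): it acts as if P(doom soon) = 2/3, the same shift SSA
# gives.  That is doom-flavoured betting behaviour, but it comes purely from
# the averaging preferences, not from any extra evidence about doom.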