I have constructed a practical example that, as I understand it, demonstrates the correctness of your point of view.
Imagine that there are 1000 civilizations in the Universe, and 999 of them will go extinct at an early stage. The one civilization that will not go extinct can survive only if it spends billions upon billions on a large prevention project.
Each civilization independently develops the DA at its early stage and concludes that the probability of Doom is almost 1. Each civilization then has two options at that early stage:
1) Start partying, trying to extract as much utility as possible before the inevitable catastrophe.
2) Ignore the anthropic update and go all in on a desperate attempt at catastrophe prevention.
If we choose option 1, then all other agents with a decision process similar to ours will come to the same conclusion; even the civilization that could have survived will not attempt to survive, and as a result all intelligent life in the universe will die off.
If we choose option 2, we will most likely fail anyway, but one of the civilizations will survive.
The choice depends on our utilitarian perspective: if we are interested only in our own civilization's well-being, option 1 gives us higher utility, but if we care about the survival of other civilizations, we should choose option 2, even though we believe the probability is against us.
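To make this concrete, here is a rough numerical sketch of the two perspectives (all numbers are placeholders I made up for illustration): the selfish calculation uses a post-DA subjective survival probability far below 1/1000, while the total-utilitarian calculation treats all 1000 civilizations' decisions as linked, so choosing prevention guarantees exactly one survivor.

```python
# Toy expected-utility sketch for the 1000-civilizations example.
# All numbers are placeholders chosen only to illustrate the qualitative point.

N = 1000                  # civilizations; 999 are doomed no matter what they do
U_PARTY = 1.0             # per-civilization utility of partying until the catastrophe
PREVENTION_COST = 0.5     # per-civilization utility spent on the prevention project
U_SURVIVAL = 1_000_000.0  # utility of a civilization that actually survives

# "Our civilization only" view: after the DA update, the subjective chance of
# being the one survivable civilization is taken to be far below 1/N
# (hypothetical value standing in for "Doom probability is almost 1").
p_survive_after_da = 1e-7

eu_selfish_party = U_PARTY
eu_selfish_prevent = p_survive_after_da * U_SURVIVAL - PREVENTION_COST

# Total-utilitarian, linked-decision view: all N civilizations run the same
# decision algorithm, so if "prevent" is chosen, exactly one of them survives
# with certainty while all N pay the cost.
total_party = N * U_PARTY
total_prevent = U_SURVIVAL - N * PREVENTION_COST

print(f"selfish: party = {eu_selfish_party:.2f}, prevent = {eu_selfish_prevent:.2f}")
print(f"total:   party = {total_party:.0f}, prevent = {total_prevent:.0f}")
```

With these made-up numbers, the per-civilization expectation favors option 1 (1.00 vs -0.40), while the linked total favors option 2 (1000 vs 999500), which is exactly the split I describe above.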
Is this example correct from the point of view of ADT?
This is a good illustration of anthropic reasoning, but it’s an illustration of the presumptuous philosopher, not of the DA (though they are symmetric in a sense). Here we have people saying “I expect to fail, but I will do it anyway because I hope others will succeed, and we all make the same decision”. Hence it’s the total utilitarian (who is the “SIAish” agent) who is acting against what seems to be the objective probabilities.
http://lesswrong.com/lw/8bw/anthropic_decision_theory_vi_applying_adt_to/