I also don’t see how you actually reject the probabilities, since I still have to behave as if they were true.
(However, I understand the similar logic in the voting example: I have to go and vote for my candidate, and I should reject any update telling me that my personal vote is very unlikely to change the result of the election.)
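To make that voting logic concrete, here is a minimal expected-value sketch with purely illustrative numbers (the probability, utility, and cost figures are my assumptions, not taken from the post):

$$
\underbrace{P(\text{my vote is decisive})}_{\sim 10^{-7}} \times \underbrace{U(\text{better candidate wins})}_{\sim 10^{9}} \;\approx\; 100 \;\gg\; \underbrace{C(\text{going to vote})}_{\sim 1}
$$

So the update “my single vote almost certainly changes nothing” is true as a probability, yet acting on it would be a mistake in expectation.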
Something like this example may help: I don’t believe that the world will end soon, but I have to invest more in x-risk prevention after learning about the DA (and given that I am an average utilitarian). I think a more concrete example would be useful for understanding here.
SIA has its own DA via the Fermi paradox, as Katja Grace showed: https://meteuphoric.wordpress.com/2010/03/23/sia-doomsday-the-filter-is-ahead/
I looked at the SIA DA in my previous post on the DA, and I feel I got that one right:
http://lesswrong.com/lw/mqg/doomsday_argument_for_anthropic_decision_theory/