DA-style reasoning in non-anthropic situations is fine. I reject the notion that anthropic probabilities are meaningful. The fact that SIA doesn’t have a DA, and is in most ways a better probability theory than SSA, is enough to indicate (ha!) that something odd is going on.
We’ve had this discussion before. I see no reason to think anthropic probabilities are meaningless, and every reason to think DA-style reasoning will generally work in anthropic situations just as well as it does elsewhere.
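For concreteness, here is a minimal sketch of the disagreement, using a toy model with two made-up population hypotheses and round numbers of my own choosing: under SSA the update towards “doom soon” is the classic DA shift, while under SIA the observer-weighting cancels it exactly.

```python
# Toy Doomsday Argument: how SSA produces a doom shift and SIA cancels it.
# All numbers here are round illustrative figures, not real estimates.

BILLION = 10**9
hypotheses = {"doom_soon": 200 * BILLION, "doom_late": 200_000 * BILLION}
prior = {h: 0.5 for h in hypotheses}
my_rank = 100 * BILLION  # roughly how many humans have been born so far
assert all(my_rank <= n for n in hypotheses.values())  # rank fits in both worlds

def normalise(weights):
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# SSA: treat yourself as a random sample from the observers who actually
# exist, so P(my_rank | N) = 1/N. Smaller worlds make your rank likelier.
ssa_posterior = normalise({h: prior[h] / n for h, n in hypotheses.items()})

# SIA: additionally weight each world by its number of observers (bigger
# worlds are likelier to contain you at all). The N and 1/N cancel,
# so the posterior equals the prior -- no doomsday shift.
sia_posterior = normalise({h: prior[h] * n / n for h, n in hypotheses.items()})

print("SSA:", ssa_posterior)  # ~0.999 on doom_soon: the classic DA shift
print("SIA:", sia_posterior)  # 0.5 / 0.5: the shift is cancelled
```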
SIA has its own DA via the Fermi paradox, as Katja Grace showed: https://meteuphoric.wordpress.com/2010/03/23/sia-doomsday-the-filter-is-ahead/
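A toy version of Grace’s point, again with made-up numbers: if the Great Filter is ahead of us rather than behind us, far more civilisations reach our current stage, so SIA’s observer-weighting pushes probability onto the late-filter (doom-ahead) hypothesis.

```python
# Toy version of the SIA doomsday ("the filter is ahead"). The planet count
# and per-hypothesis probabilities are made up purely for illustration.

PLANETS = 10**9
# P(a given planet produces a civilisation at our stage | hypothesis)
reach_our_stage = {"filter_early": 1e-6, "filter_late": 1e-1}
prior = {h: 0.5 for h in reach_our_stage}

# SIA: weight each hypothesis by the expected number of observers at our
# stage, i.e. by how many "people like us" each kind of world contains.
weighted = {h: prior[h] * PLANETS * p for h, p in reach_our_stage.items()}
total = sum(weighted.values())
posterior = {h: w / total for h, w in weighted.items()}

print(posterior)  # filter_late ~0.99999: the filter, and doom, are ahead
```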
I also don’t see how you actually reject these probabilities, as I still have to behave as if they were true. (However, I understand the similar logic in the voting example: I have to go and vote for my candidate, and should reject any update telling me that my personal vote is very unlikely to change the result of the election.)
Something like this example may help: I don’t believe that the world will end soon, but I have to invest more in x-risk prevention after learning about DA (given that I am an average utilitarian). I think a more concrete example would be useful for understanding here.
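Here is a rough sketch of what I mean, reusing the toy worlds from the example above (illustrative numbers, not a real model): an average utilitarian’s decision weights reproduce the DA shift even while the non-anthropic probabilities stay at 50/50.

```python
# Acting "as if" DA were true without believing it: an average utilitarian
# evaluating an x-risk intervention. Reuses the toy worlds above; the
# benefit B and the 50/50 non-anthropic prior are illustrative assumptions.

BILLION = 10**9
worlds = {"doom_soon": 200 * BILLION, "doom_late": 200_000 * BILLION}
prior = {w: 0.5 for w in worlds}
B = 1.0  # total benefit of the intervention, arbitrary utility units

# Average utilitarianism divides total benefit by population size, so the
# small (doom-soon) world dominates the expected value. The agent ends up
# behaving as if doom-soon had ~1000x the weight -- mirroring the SSA/DA
# shift -- even though its probabilities never moved from 50/50.
decision_weight = {w: prior[w] * B / n for w, n in worlds.items()}
total = sum(decision_weight.values())
print({w: v / total for w, v in decision_weight.items()})
# -> {'doom_soon': ~0.999, 'doom_late': ~0.001}
```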
I looked at the SIA DA in my previous post on DA, and I feel I got that one right:
http://lesswrong.com/lw/mqg/doomsday_argument_for_anthropic_decision_theory/