I agree that thinking about payoffs is obviously correct, and ideally anyone talking about SIA and SSA should also keep this in the back of their minds. That doesn’t make anthropic assumptions useless, for the following two reasons:
1. They give the correct answer for some natural payoff structures (see the sketch at the end of this comment).
2. They are friendlier to our intuitive ideas of how probability should work.
I don’t actually think that they’re worth the effort, but that’s just a question of presentation. In any case, the particular choice of anthropic language is less important than engaging with the thesis, though the avenue of engagement may be along the lines of “SIA is inappropriate for the kind of payoffs involved in the Doomsday Argument, because...”
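To make the first reason concrete, here is a minimal sketch (my own illustration, not from the comment above) using the standard Sleeping Beauty setup: a fair coin is flipped, heads means one awakening and tails means two, and Beauty reports a credence in heads that is scored with a quadratic (Brier) loss. The function names and scoring setup are illustrative assumptions; the point is only that the payoff structure selects the "correct" credence.

```python
import random

def average_brier_loss(p, trials=100_000, per_awakening=True, seed=0):
    """Average Brier loss for reporting credence p that the coin landed heads.

    per_awakening=True  -> scored once per awakening (SIA-flavored payoff)
    per_awakening=False -> scored once per experiment (SSA-flavored payoff)
    """
    rng = random.Random(seed)
    total_loss, count = 0.0, 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        outcome = 1.0 if heads else 0.0
        # Heads: Beauty is woken once; tails: she is woken twice.
        awakenings = 1 if heads else 2
        if per_awakening:
            # Each awakening is scored separately.
            total_loss += awakenings * (p - outcome) ** 2
            count += awakenings
        else:
            # The whole experiment is scored once.
            total_loss += (p - outcome) ** 2
            count += 1
    return total_loss / count

if __name__ == "__main__":
    for p in (1 / 3, 1 / 2):
        print(f"p = {p:.3f}: "
              f"per-awakening loss = {average_brier_loss(p, per_awakening=True):.4f}, "
              f"per-world loss = {average_brier_loss(p, per_awakening=False):.4f}")
```

Under per-awakening scoring the loss is lower at p = 1/3 (the SIA answer), while under per-world scoring it is lower at p = 1/2 (the SSA answer); that is the sense in which each assumption gives the correct answer for its natural payoff structure.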