Wait, I had the impression that this community had come to the consensus that SIA vs SSA was a problem along the lines of “If a tree falls in the woods and no one’s around, does it make a sound?” It finds an ambiguity in what we mean by “probability”, and forces us to grapple with it.
In fact, there’s a well-upvoted post with exactly that content.
The Bayesian definition of “probability” is essentially just a number you use in decision-making algorithms, constrained to satisfy certain optimality criteria. The optimal number to use in a decision obviously depends on the problem, but the unintuitive and surprising thing is that it can depend on details like how forgetful you are, whether you’ve been copied, and how payoffs are aggregated.
The post I linked gave some examples:
If Sleeping Beauty is credited a cumulative dollar every time she guesses correctly, she should act as if she assigns a probability of 1⁄3 to the proposition that the coin landed heads.
If Sleeping Beauty is given a dollar only if she guesses correctly in all cases, and otherwise nothing, then she should act as if she assigns a probability of 1⁄2 to that proposition.
Other payoff structures give other probabilities. If you never recombine Sleeping Beauty, then the problem starts to become about whether or not she values her alternate self getting money and what she believes her alternate self will do.
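To make the arithmetic behind those first two cases concrete, here’s a quick simulation sketch. This is my own illustration rather than anything from the linked post, and the function and scheme names are just placeholders:

```python
import random

def expected_payoffs(n_trials=100_000, guess="tails"):
    """Average payoff per experiment when Beauty always guesses `guess`.

    Two payoff schemes from the examples above:
      - cumulative: $1 for every awakening on which she names the coin correctly
      - all_or_nothing: $1 only if she is correct on every awakening of the experiment
    """
    cumulative = 0
    all_or_nothing = 0
    for _ in range(n_trials):
        coin = random.choice(["heads", "tails"])
        awakenings = 1 if coin == "heads" else 2  # tails: woken Monday and Tuesday
        if guess == coin:
            cumulative += awakenings   # correct at every awakening
            all_or_nothing += 1        # correct "in all cases"
        # wrong guess: nothing under either scheme
    return cumulative / n_trials, all_or_nothing / n_trials

for g in ("heads", "tails"):
    cum, aon = expected_payoffs(guess=g)
    print(f"always guess {g!r}: cumulative EV = {cum:.2f}, all-or-nothing EV = {aon:.2f}")
```

Under the cumulative scheme, always guessing tails earns roughly twice what always guessing heads does, so Beauty should only bet on heads at 2:1 odds, i.e. she acts as if P(heads) = 1⁄3; under the all-or-nothing scheme the two strategies tie, so she acts as if P(heads) = 1⁄2.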
I agree that thinking about payoffs is obviously correct, and ideally anyone talking about SIA and SSA should also keep this in the back of their heads. That doesn’t make anthropic assumptions useless, for the following two reasons:
They give the correct answer for some natural payoff structures.
They are friendlier to our intuitive ideas of how probability should work.
I don’t actually think that they’re worth the effort, but that’s just a question of presentation. In any case, the particular choice of anthropic language is less important than engaging with the thesis, though the particular avenue of engagement may be along the lines of “SIA is inappropriate for the kind of payoffs involved in the Doomsday Argument, because...”
I don’t think the question pits SSA against SIA; rather, it concerns what SIA itself implies. But I think my argument was wrong, and I’ve edited the top-level post to explain why.