I’ve also had a look at the Armstrong paper recommended by “endoself” above. This is actually rather interesting, since it relates SIA and SSA to decision theory.
Broadly, Armstrong says that if you are trying to maximize total expected utility, then it makes sense to apply SIA + SSA together (though Armstrong just describes this combination as “SIA”), whereas if you are trying to maximize average utility per person, or, selfishly, your own individual utility, then it makes sense to apply SSA without SIA. This supports both the “halfer” and “thirder” solutions to the Sleeping Beauty problem, since each is justified by a different utility function. Very elegant.
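To make the betting version of this concrete, here is a minimal sketch of the standard Sleeping Beauty bet (my own toy example, not taken from Armstrong’s paper): a fair coin, one awakening on heads, two on tails, and at each awakening Beauty may buy, for a stake, a ticket that pays 1 if the coin landed tails. Summing payoffs over awakenings gives the thirder break-even odds; averaging within each branch gives the halfer odds.

```python
# Toy Sleeping Beauty bet (illustrative sketch, not from Armstrong's paper).
# Heads -> 1 awakening, tails -> 2 awakenings. At each awakening Beauty may
# buy, for `stake`, a ticket paying 1 if the coin landed tails.

def expected_value(stake, total_utility=True):
    """Expected value of buying the ticket at every awakening.

    total_utility=True  sums payoffs over all awakenings (total utilitarian);
    total_utility=False averages payoffs within each branch (average/selfish).
    """
    # Heads branch: one awakening, the ticket loses.
    heads_payoff = -stake
    # Tails branch: two awakenings, each ticket wins 1 - stake.
    tails_payoffs = [1 - stake, 1 - stake]
    tails_payoff = sum(tails_payoffs) if total_utility else sum(tails_payoffs) / len(tails_payoffs)
    return 0.5 * heads_payoff + 0.5 * tails_payoff

# Total utility: break-even stake is 2/3, i.e. the "thirder" betting odds.
print(expected_value(2/3, total_utility=True))   # ~0.0
# Average/selfish utility: break-even stake is 1/2, i.e. the "halfer" odds.
print(expected_value(1/2, total_utility=False))  # ~0.0
```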
However, this also seems to tie in with my remarks above, since total utility maximizers come unstuck in infinite universes (or multiverses): total utility will be infinite whatever they do, and the only sensible thing to do is to maximize personal utility or average utility. Further, if a decider is trying to maximize total expected utility, then they effectively force themselves to bet that the universe is infinite, since if they guess right, the positive payoff from that correct guess will be realized infinitely many times, whereas if they guess wrong, the negative payoff from that incorrect guess will be realized only finitely many times. So I think this suggests, in a rather different way, that SIA doesn’t work as a way out of DA, and also that it’s rather silly (since it creates an overwhelming bias towards guesses at infinite universes or multiverses).
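Here is a toy sketch of that bias (again my own illustration with made-up payoffs, not anything from Armstrong’s paper): assume each observer who guesses correctly gains utility u, each observer who guesses wrongly loses u, and every observer in the universe makes the same guess.

```python
# Toy model of the "bias towards infinite universes" point (illustrative
# payoffs only). A total-utility maximizer sums payoffs over all observers.

def expected_total_utility(guess_infinite, p_infinite, u=1.0, finite_pop=10**10):
    """Expected total utility of the guess, summed over all observers."""
    if guess_infinite:
        # Right about an infinite universe: the gain is realized infinitely often.
        # Wrong: only a finite population bears the cost.
        return p_infinite * float('inf') + (1 - p_infinite) * (-u * finite_pop)
    else:
        # Right about a finite universe: only a finite total gain.
        # Wrong: an infinite population bears the cost.
        return p_infinite * -float('inf') + (1 - p_infinite) * (u * finite_pop)

# Any nonzero credence in an infinite universe makes "guess infinite" dominate.
print(expected_total_utility(True, p_infinite=0.001))   # inf
print(expected_total_utility(False, p_infinite=0.001))  # -inf
```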
One other thing I don’t get is Armstrong’s rather odd claim that if you are an average utility (or selfish utility) maximizer, then you shouldn’t care anyway about “Doom Soon”, so in practice the DA brings about no decision-theoretic shift in your behaviour. This strikes me as just plain wrong: an average utilitarian would still be worried about the big “disutility” of people who live through (or close to) the Doom, and a selfish utility maximizer would worry about the chance of seeing Doom himself.
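A crude worked example of why the average utilitarian still cares (my own purely illustrative numbers, not from Armstrong or the DA literature): suppose everyone gets utility 1, except the final generation before Doom, who suffer an extra disutility. Then “Doom Soon” drags the average down much more than “Doom Late”, because the doomed generation is a larger fraction of everyone who ever lives.

```python
# Crude toy model: average utility per person when the last generation
# before Doom suffers an extra disutility d (illustrative numbers only).

def average_utility(total_people, doomed_generation_size, d=5.0):
    """Average utility per person; the doomed generation loses d each."""
    total = total_people * 1.0 - doomed_generation_size * d
    return total / total_people

# "Doom Soon": ~200 billion people ever, final generation ~10 billion.
print(average_utility(200e9, 10e9))    # 0.75
# "Doom Late": ~200 trillion people ever, same-sized final generation.
print(average_utility(200e12, 10e9))   # ~0.99975
```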
Tim—thanks. I’ll check out the article.