This looks like what Armstrong calls a “selfless” utility function, i.e. it has no explicit term for Beauty’s welfare here and now, or at any other point in time.
Stuart’s terms are a bit misleading because they’re about decision-making by counting utilities, which is not the same as decision-making by maximizing expected utility. His terms like “selfish” and “selfless” and so on are only names for counting rules for utilities, and have no direct counterpart in expected utility maximizers.
So U can contain terms like “I eat a candy bar. +1 utility.” Or it could contain only terms like “a sentient life-form eats a candy bar. +1 utility.” This doesn’t actually change what process Sleeping Beauty uses to make decisions in anthropic situations, because those ideas only apply to decision-making by counting utilities. Additionally, Sleeping Beauty makes identical decisions in anthropic and non-anthropic situations, if the utilities and the probabilities are the same.
OK, I think this is clearer. The main point is that whatever this “ordinary” U is scoring (and it could be more or less anything), winning the tails bet scores +2 whereas losing it scores −1. This leads to a 2⁄3 betting probability. If subjective probabilities are identical to betting probabilities (a common position for Bayesians), then the subjective probability of tails has to be 2⁄3.
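As a quick sanity check (my own sketch, not part of the original discussion), the 2⁄3 figure falls out of an indifference calculation: a unit bet on tails at betting probability q pays (1−q)/q on a win and costs 1 on a loss, and under tails Beauty wins the bet at both awakenings while under heads she loses it once. The expected value per coin toss is zero exactly at q = 2⁄3:

```python
from fractions import Fraction

def expected_value_per_toss(q):
    """EV per coin toss of Beauty betting one unit on tails at every
    awakening, at betting probability q. Tails (prob 1/2): two
    awakenings, two wins. Heads (prob 1/2): one awakening, one loss."""
    payout = (1 - q) / q                 # payoff per unit stake on a win
    return Fraction(1, 2) * 2 * payout - Fraction(1, 2) * 1

print(expected_value_per_toss(Fraction(2, 3)))  # 0: indifferent at q = 2/3
print(expected_value_per_toss(Fraction(1, 2)))  # 1/2: even odds favor tails
```

So at even odds (q = 1⁄2) the tails bet is strictly profitable, and the odds at which Beauty becomes indifferent are the ones implying a 2⁄3 betting probability.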
The point about alternative utility functions, though, is that this property doesn’t always hold: two Beauties winning doesn’t have to be twice as good as one Beauty winning. And that’s especially true for a trillion Beauties winning.
Finally, if you adopt a relative-frequency interpretation (the coin-toss is repeated many times, and we take the limit to infinity), then there are obviously two relative frequencies of interest. Half the coins fall Tails, but two thirds of Beauty awakenings are after Tails. Either of these can be interpreted as a probability.
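Both relative frequencies are easy to exhibit by simulation (a minimal sketch under the standard setup: one awakening after heads, two after tails):

```python
import random

random.seed(0)
tosses = 100_000
tails_tosses = 0
awakenings = 0
tails_awakenings = 0

for _ in range(tosses):
    tails = random.random() < 0.5
    if tails:
        tails_tosses += 1
        awakenings += 2          # woken Monday and Tuesday
        tails_awakenings += 2
    else:
        awakenings += 1          # woken Monday only

print(tails_tosses / tosses)          # ~ 1/2 of coins fall tails
print(tails_awakenings / awakenings)  # ~ 2/3 of awakenings follow tails
```

The same run produces both numbers; which one counts as “the” probability of tails depends on whether you count per toss or per awakening.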
If subjective probabilities are identical to betting probabilities (a common position for Bayesians)
If we start with an expected utility maximizer, what does it do when deciding whether to take a bet on, say, a coin flip? Expected utility is the utility times the probability, so it checks whether P(heads) U(heads) > P(tails) U(tails). So betting can only tell you the probability if you know the utilities. And changing the utility function around is enough to get really interesting behavior, but it doesn’t mean you changed the probabilities.
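A minimal sketch of that decision rule (my own illustration, with made-up utilities): the maximizer accepts the bet exactly when its expected utility beats declining, so changing the utilities flips the decision even with the probabilities held fixed.

```python
def take_bet(p_heads, u_heads, u_tails):
    """Expected-utility maximizer: accept a bet iff its expected
    utility exceeds that of declining (normalized to 0)."""
    p_tails = 1 - p_heads
    return p_heads * u_heads + p_tails * u_tails > 0

# Same probability of heads, different utilities -> different behavior:
print(take_bet(0.5, +1, -1))   # False: EV is exactly 0, so decline
print(take_bet(0.5, +2, -1))   # True:  EV is +0.5
```

Observing only the second accept/decline tells you nothing about whether P(heads) moved or U did; that is why betting behavior alone can’t pin down the probabilities.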
Half the coins fall Tails, but two thirds of Beauty awakenings are after Tails. Either of these can be interpreted as a probability.
What sort of questions, given what sorts of information, would give you these two probabilities? :D
For the first question: if I observe multiple coin-tosses and count what fraction of them are tails, then what should I expect that fraction to be? (Answer: one half.) Clearly “I” here is anyone other than Beauty herself, who never observes the coin-toss.
For the second question: if I interview Beauty on multiple days (as the story is repeated) and then ask her courtiers (who did see the toss) whether it was heads or tails, then what fraction of the time will they tell me tails? (Answer: two thirds.)
What information is needed for this? None except what is defined in the original problem, though with the stipulation that the story is repeated often enough to get convergence.
Incidentally, these questions and answers aren’t framed as bets, though I could use them to decide whether to make side-bets.