What makes something an “outcome” in Savage’s theorem is simply that it follows a certain set of rules and relationships—the interpretation into the real world is left to the reader.
It’s totally possible to regard the state of the entire universe as the “outcome”—in that case, the things that correspond to the “actions” (the things the agent chooses between to get different “outcomes”) are actually the strategies the agent could follow. The things the agent always acts as if it has probabilities over are the “events”: the things outside the agent’s control that determine the mapping from “actions” to “outcomes.” Given this interpretation, the day does not fulfill such a role—only the coin does.
So in that sense, you’re totally right. But this interpretation isn’t unique.
It’s also a valid interpretation to have the “outcome” be whether Sleeping Beauty wins, loses, or doesn’t take an individual bet about what day it is (there is a preference ordering over these things), the “action” be accepting or rejecting the bet, and the “event” be which day it is (the outcome is a function of the chosen action and the event).
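For concreteness, here is a minimal sketch of this interpretation in code; the names and the specific example bet are mine, not part of the original setup:

```python
# Hypothetical formalization: "events" are the days (outside the agent's
# control), "outcomes" are the results of an individual bet, and an
# "action" induces a function from events to outcomes.

EVENTS = ["Monday", "Tuesday"]
OUTCOMES = ["win", "lose", "no_bet"]  # there is a preference ordering over these

def outcome(action: str, event: str) -> str:
    """The outcome as a function of the chosen action and the event.
    Example bet: 'today is Tuesday'."""
    if action == "reject":
        return "no_bet"
    return "win" if event == "Tuesday" else "lose"

# Savage's theorem: if preferences over all such event-to-outcome
# functions satisfy his axioms, they are represented by a probability
# over EVENTS and a utility over OUTCOMES, with actions ranked by
# expected utility.
```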
Here’s the point: in every valid interpretation, a consistent Sleeping Beauty will act as if she has probabilities over the events. That’s what makes Savage’s theorem a theorem. What day it is is an event in a valid interpretation; therefore, Sleeping Beauty acts as if it has a probability.
Side note: It is possible to make what day it is a non-”event,” at least in the Savage sense. You just have to force the “outcomes” to be the outcomes of entire strategies. Suppose Sleeping Beauty instead just had to choose A or B on each day, and only gets a reward if her choices are AB or BA, but not AA or BB (the same goes for any case where the reward tensor is not a tensor sum of rewards for individual days). Savage’s theorem does not say that, to play this game well, you have to act as if you assign a probability to what day it is. The canonical example of this problem in anthropics is the absent-minded driver problem—compared to Sleeping Beauty, it is strictly trickier to talk about whether the absent-minded driver should have a probability that they’re at the first intersection. Arguments in favor have to either resort to Cox’s theorem (which I find more confusing) or engage in contortions about games that counterfactually could be constructed.
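Here is a toy check of that side note (my own minimal formalization, assuming the amnesia forces Sleeping Beauty to play the same mixed strategy on both days):

```python
# Sleeping Beauty must play the same mixed strategy p = P(choose A) on
# both days; the reward is 1 for choice-pairs AB or BA, 0 for AA or BB.

def expected_reward(p: float) -> float:
    return p * (1 - p) + (1 - p) * p  # P(AB) + P(BA)

best_p = max((i / 100 for i in range(101)), key=expected_reward)
print(best_p, expected_reward(best_p))  # -> 0.5 0.5

# The reward r(x, y) cannot be written as a sum f(x) + g(y): that would
# require r(A,B) + r(B,A) == r(A,A) + r(B,B), i.e. 2 == 0. With no
# per-day "consequences" for Savage's theorem to operate on, nothing
# here forces a probability assignment over which day it is.
```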
> It’s also a valid interpretation to have the “outcome” be whether Sleeping Beauty wins, loses, or doesn’t take an individual bet about what day it is (there is a preference ordering over these things), the “action” be accepting or rejecting the bet, and the “event” be which day it is (the outcome is a function of the chosen action and the event).
In Savage’s theorem, acts are arbitrary functions from the set of states to the set of consequences. Therefore, to apply Savage’s theorem in this context, you have to consider blatantly inconsistent counterfactuals in which Sleeping Beauty makes different choices in computationally equivalent situations. If you have an extension of the utility function to these counterfactuals, and it happens to satisfy the conditions of Savage’s theorem, then you can assign probabilities. This extension is not unique. Moreover, in some anthropic scenarios it doesn’t exist (as you noted yourself).
> ...arguments in favor have to either resort to Cox’s theorem (which I find more confusing) or engage in contortions about games that counterfactually could be constructed.
Cox’s theorem only says that any reasonable measure of uncertainty can be transformed into a probability assignment. Here there is no such measure of uncertainty. Different counterfactual games lead to different probability assignments.
First, thanks for having this conversation with me. Before, I was very overconfident in my ability to explain this in a post.
In order for the local interpretation of Sleeping Beauty to work, it’s true that the utility function has to assign utilities to impossible counterfactuals. I don’t think this is a problem, but it does raise an interesting point.
Because only one action is actually taken, any consistent consequentialist decision theory that considers more than one action has to assign utilities to impossible counterfactuals. But the counterfactuals you mention are different: they have to be assigned a utility, yet they never actually get considered by our decision theory, because they’re causally inaccessible—their utilities don’t affect anything, in a logical-counterfactual or algorithmic-causal sense.
In the utility functions I used as examples above (winning bets to maximize money, trying to watch a sports game on a specific day), the utility for these impossible counterfactuals is naturally specified because the utility function was defined as a sum of the utilities of local properties of the universe. This is what both allows local “consequences” in Savage’s theorem and specifies those causally inaccessible utilities.
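To illustrate (notation mine): if the utility of a two-day history is given as a sum of local, per-day terms,

$$U(x_1 x_2) = u_1(x_1) + u_2(x_2),$$

then the sum automatically assigns a value to histories such as AB in which the two computationally identical choices differ, which is exactly the extension to causally inaccessible counterfactuals that Savage’s arbitrary acts require.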
This raises the question of whether, if you were given only the total utilities of the causally accessible histories of the universe, it would be “okay” to choose the inaccessible utilities arbitrarily, such that the total utility could be expressed in terms of local properties. I think this idea might neglect the importance of causal information in deciding what to call an “event.”
> Different counterfactual games lead to different probability assignments.
Do you have some examples in mind? I’ve seen this claim before, but the arguments for it have either relied on the assumption that probabilities can be recovered straightforwardly from the optimal action (not valid when straightforward decision theory fails, e.g. the absent-minded driver or Psy-kosh’s non-anthropic problem), or on the assumption that certain population-ethics preferences can be ignored without changing anything (highly dubious).
> In order for the local interpretation of Sleeping Beauty to work, it’s true that the utility function has to assign utilities to impossible counterfactuals. I don’t think this is a problem...
It is a problem in the sense that there is no canonical way to assign these utilities in general.
> In the utility functions I used as examples above (winning bets to maximize money, trying to watch a sports game on a specific day), the utility for these impossible counterfactuals is naturally specified because the utility function was defined as a sum of the utilities of local properties of the universe. This is what both allows local “consequences” in Savage’s theorem and specifies those causally inaccessible utilities.
True. As a side note, Savage’s theorem is not quite the right tool here, since it produces both probabilities and utilities, while in our situation the utilities are already given.
> This raises the question of whether, if you were given only the total utilities of the causally accessible histories of the universe, it would be “okay” to choose the inaccessible utilities arbitrarily, such that the total utility could be expressed in terms of local properties.
The problem is that different extensions produce completely different probabilities. For example, suppose U(AA) = 0 and U(BB) = 1. We can decide U(AB) = U(BA) = 0.5, in which case the probability of each copy is 50%. Or we can decide U(AB) = 0.7 and U(BA) = 0.3, in which case the probability of the first copy is 30% and the probability of the second copy is 70%.
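Spelling out the expected-utility reading behind these numbers (the normalization u(A) = 0, u(B) = 1 is my assumption, not forced by the problem):

$$U(xy) = P_1\,u(x) + P_2\,u(y), \qquad u(A) = 0,\quad u(B) = 1,$$

so P_1 = U(BA) and P_2 = U(AB), while U(AA) = 0 and U(BB) = P_1 + P_2 = 1 hold automatically. Then U(AB) = U(BA) = 0.5 gives P_1 = P_2 = 0.5, and U(AB) = 0.7, U(BA) = 0.3 gives P_1 = 0.3, P_2 = 0.7.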
The ambiguity is avoided if each copy has an independent source of randomness, because then all of the counterfactuals are “legal.” However, as the example above shows, these probabilities depend on the utility function. So, even if we consider Sleeping Beauties with independent sources of randomness, the classical formulation of the problem is ambiguous, since it doesn’t specify a utility function. Moreover, if all of the counterfactuals are legal, it might be that the utility function doesn’t decompose into a linear combination over copies, in which case there is no probability assignment at all. This is why Everett branches have well-defined probabilities but, e.g., brain emulation clones don’t.