Sure, if the bet is offered only once per experiment, Beauty receives new evidence (from a thirder's perspective) and she could update.
In case the bet is offered on every awakening: do you mean that if she gives conflicting answers on Monday and Tuesday, the bet is nevertheless regarded as accepted?
My initial idea was that if, for example, only her Monday answer counts and Beauty knows that, she could reason that when her answer counts it is Monday, arriving at the conclusion that it is reasonable to act as if it were Monday on every awakening, thus grounding her answer on P(H/Monday)=1/2. The same logic holds for the rules „last awakening counts“ and „random awakening counts“.
„In case the bet is offered on every awakening: do you mean that if she gives conflicting answers on Monday and Tuesday, the bet is nevertheless regarded as accepted?“
Yes I do.
Of course, if the experiment is run as stated she wouldn’t be able to give conflicting answers, so the point is moot. But having a strict algorithm for resolving such theoretical cases is a good thing anyway.
„My initial idea was that if, for example, only her Monday answer counts and Beauty knows that, she could reason that when her answer counts it is Monday, arriving at the conclusion that it is reasonable to act as if it were Monday on every awakening, thus grounding her answer on P(H/Monday)=1/2. The same logic holds for the rules „last awakening counts“ and „random awakening counts“.“
Yes, I got it. As a matter of fact, this is unlawful. A probability estimate is about the evidence you receive, not about what “counts” for a betting scheme. If Beauty receives the same evidence whether or not her awakening counts, she can’t update her probability estimate. If, in order to arrive at the correct answer, she needs to behave as if every day were Monday, it means that there is something wrong with her model.
Thankfully for thirdism, she does not have to. She can simply assign zero utility to the Tuesday awakening and get the correct betting odds.
Anyway, all this is quite tangential to the question of utility instability, which is about Beauty making a bet on Sunday and then reflecting on it during the experiment, even if no further bets are proposed. According to thirdism the probability of the coin being Heads changes on awakening, so, in order for Beauty not to regret having made an optimal bet on Sunday, her utility has to change as well. Hence utility instability.
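To make the zero-utility claim above concrete, here is a minimal numerical sketch (my own illustration, not part of the original exchange; the bet structure of “pays 1 if Tails, loses the stake if Heads, settled once per experiment” is an assumption): with thirder credences P(Heads&Monday)=P(Tails&Monday)=P(Tails&Tuesday)=1/3 and zero utility attached to the Tuesday awakening, the once-per-experiment Tails bet breaks even exactly at 1:1, matching the per-experiment frequency of Tails.

```python
import random

# Thirder per-awakening credences for the three possible awakening states.
p_heads_monday = 1 / 3
p_tails_monday = 1 / 3
p_tails_tuesday = 1 / 3

def eu_bet_on_tails(stake, tuesday_weight):
    """Expected utility of a once-per-experiment bet on Tails that pays 1 if Tails
    and loses `stake` if Heads, evaluated with thirder credences and a configurable
    utility weight on the Tuesday awakening."""
    return (p_tails_monday * 1
            + p_tails_tuesday * 1 * tuesday_weight
            - p_heads_monday * stake)

# With zero utility on the Tuesday awakening the bet breaks even exactly at 1:1 stakes ...
print(eu_bet_on_tails(stake=1.0, tuesday_weight=0.0))   # 0.0

# ... which matches the per-experiment frequency of Tails for a fair coin.
runs = 100_000
tails_runs = sum(random.random() < 0.5 for _ in range(runs))
print(tails_runs / runs)                                 # ~0.5
```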
Honestly, I do not see any unlawful reasoning going on here. First of all, it is certainly important to distinguish between a probability model and a strategy. The job of a probability model is simply to state the probability of certain events and to describe how probabilities are affected by the realization of other events. The job of a strategy, on the other hand, is to guide decision making towards certain predefined goals.
My point is that the probabilities a model suggests, based on the currently available evidence, do NOT necessarily have to match the probabilities that are relevant to your strategy and decisions. If Beauty is awake and doesn't know whether it is the day her bet counts, it is in fact a rational strategy to behave and decide as if her bet counts today. If she knows that her bet only counts on Monday and her probability model suggests that „Today is Monday“ is relevant for H, then ideal rationality requires her to base her decision on P(H/Monday), because she knows that Monday is realized when her decision counts. This guarantees that on her Monday awakening, when her decision counts, she is calculating the probability of Heads based on all relevant evidence that is realized on that day.
It is true that the thirder model does not suggest such a strategy, but suggesting strategies, and therefore suggesting which probabilities are relevant for decisions, is not the job of a probability model anyway. Similar is the case of the Technicolor Beauty: the strategy „only updating if Red“ is neither suggested nor hinted at by your model. All your model suggests are probabilities conditional on the realization of certain events. It can’t tell you to treat the observation „Red room“ as a realization of the event „There is an awakening in a red room“ while treating the observation „Blue room“ merely as a realization of the event „There is an awakening in a red or a blue room“ instead of „There is an awakening in a blue room“. The observation of a blue room is always a realization of both of these events, and it is your strategy of „tracking Red“, not your probability model, that tells you to prefer one over the other as the relevant evidence for calculating your probabilities. I had been thinking this over for a while after I recently discovered this „updating only if Red“ strategy for myself, and how it could be derived directly from the halfer model. But I honestly see no better justification for applying it than the plain fact that it proves more successful in the long run.
„First of all, it is certainly important to distinguish between a probability model and a strategy. The job of a probability model is simply to state the probability of certain events and to describe how probabilities are affected by the realization of other events. The job of a strategy, on the other hand, is to guide decision making towards certain predefined goals.“
Of course. As soon as we are talking about goals and strategies we are not talking about just probabilities anymore. We are also talking about utilities and expected utilities. However, the probabilities do not suddenly change because of that. The probabilistic model stays the same; there are simply additional considerations on top of it.
„My point is that the probabilities a model suggests, based on the currently available evidence, do NOT necessarily have to match the probabilities that are relevant to your strategy and decisions.“
Whether or not your probability model leads to optimal decision making is the test that allows you to falsify it. There are no separate “theoretical probabilities” and “decision-making probabilities”. Only the ones that guide your behaviour can be correct. What’s the point of a theory that is not applicable in practice, anyway?
If your model claims that the probability based on your evidence is 1⁄3 but the optimal decision making happens when you act as if it’s 1⁄2, then your model is wrong and you switch to a model that claims that the probability is 1⁄2. That’s the whole reason why betting arguments are popular.
„If Beauty is awake and doesn't know whether it is the day her bet counts, it is in fact a rational strategy to behave and decide as if her bet counts today.“
Questions of what “counts” or “matters” are not in the realm of probability. However, Beauty is free to adjust her utilities based on the specifics of the betting scheme.
„All your model suggests are probabilities conditional on the realization of certain events.“
The model says that
P(Heads|Red) = 1⁄3
P(Heads|Blue) = 1⁄3
but
P(Heads|Red or Blue) = 1⁄2
This obviously translates into a betting scheme: someone who bets on Tails only when the room is Red wins 2⁄3 of the time, and someone who bets on Tails only when the room is Blue wins 2⁄3 of the time, while someone who always bets on Tails wins only 1⁄2 of the time.
This leads to the conclusion that observing the event “Red”, as opposed to “Red or Blue”, is possible only for someone who has been expecting to observe the event “Red” in particular. Likewise, observing HTHHTTHT is possible only for a person who was expecting this particular sequence of coin tosses, rather than any combination of length 8. See Another Non-Anthropic Paradox: The Unsurprising Rareness of Rare Events.
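The claimed frequencies can be checked with a quick Monte Carlo sketch (my own illustrative code; it assumes the usual Technicolor setup in which Monday and Tuesday are painted different colors assigned at random, Beauty is awake only on Monday if Heads, and each strategy is scored at most once per experiment, which I take to be the intended reading of “wins 2/3 of the time”):

```python
import random

def run_experiments(n=100_000):
    # For each strategy: [number of experiments with a bet placed, number of those won]
    record = {"red_only": [0, 0], "blue_only": [0, 0], "always": [0, 0]}
    for _ in range(n):
        heads = random.random() < 0.5
        monday_red = random.random() < 0.5              # Monday and Tuesday get opposite colors
        colors_seen = {"Red" if monday_red else "Blue"}  # Monday awakening always happens
        if not heads:                                    # Tails: Tuesday awakening adds the other color
            colors_seen.add("Blue" if monday_red else "Red")
        for name, trigger in [("red_only", {"Red"}), ("blue_only", {"Blue"}), ("always", {"Red", "Blue"})]:
            if colors_seen & trigger:                    # the strategy bets on Tails in this experiment
                record[name][0] += 1
                record[name][1] += not heads             # the bet wins iff the coin landed Tails
    for name, (placed, won) in record.items():
        print(f"{name}: wins {won / placed:.3f} of the bets it places")

run_experiments()
# red_only / blue_only come out around 0.667, always around 0.5
```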
„Whether or not your probability model leads to optimal decision making is the test that allows you to falsify it.“
Sure, I don't deny that. What I am saying is that your probability model doesn't tell you which probability you have to base a certain decision on. If you can derive a probability from your model and provide a good reason to consider this probability relevant to your decision, your model is not falsified as long as you arrive at the right decision.
Suppose a simple experiment in which the experimenter flips a fair coin and you have to guess Heads or Tails, but you are only rewarded for a correct guess if the coin comes up Tails. Then, of course, you should still entertain the unconditional probabilities P(Heads)=P(Tails)=1/2. But this uncertainty is completely irrelevant to your decision. What is relevant, however, is P(Tails/Tails)=1 and P(Heads/Tails)=0, from which you conclude that you should follow the strategy of always guessing Tails.
Another way to arrive at this strategy is to calculate expected utilities with U(Heads)=0, as you would propose. But this is not the only reasonable solution; it is just a different route of reasoning for taking into account the experimental condition that your decision counts only if the coin lands Tails.
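For concreteness, a tiny sketch of the two routes (my own illustration, assuming a reward of 1 unit for a correct guess, paid only when the coin lands Tails):

```python
# Route 1: condition on the only case in which the decision has consequences (Tails).
p_tails_given_tails = 1.0          # P(Tails/Tails)
p_heads_given_tails = 0.0          # P(Heads/Tails)
ev_guess_tails_route1 = p_tails_given_tails * 1   # guessing Tails is correct whenever it matters
ev_guess_heads_route1 = p_heads_given_tails * 1   # guessing Heads is never rewarded

# Route 2: keep the unconditional probabilities but set the utility of the Heads branch to zero.
p_heads, p_tails = 0.5, 0.5
u_correct_on_heads, u_correct_on_tails = 0.0, 1.0
ev_guess_tails_route2 = p_tails * u_correct_on_tails
ev_guess_heads_route2 = p_heads * u_correct_on_heads

print(ev_guess_tails_route1, ev_guess_heads_route1)   # 1.0 0.0
print(ev_guess_tails_route2, ev_guess_heads_route2)   # 0.5 0.0
# Both routes rank "always guess Tails" above "always guess Heads".
```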
„The model says that
P(Heads|Red) = 1⁄3
P(Heads|Blue) = 1⁄3
but
P(Heads|Red or Blue) = 1⁄2
This obviously translates into a betting scheme: someone who bets on Tails only when the room is Red wins 2⁄3 of the time, and someone who bets on Tails only when the room is Blue wins 2⁄3 of the time, while someone who always bets on Tails wins only 1⁄2 of the time.“
A quick translation of the probabilities is:
P(Heads/Red)=1/3: If your total evidence is Red, then you should entertain probability 1⁄3 for Heads.
P(Heads/Blue)=1/3: If your total evidence is Blue, then you should entertain probability 1⁄3 for Heads.
P(Heads/Red or Blue)=1/2: If your total evidence is Red or Blue, i.e. you know that it is either red or blue or both, but not which exactly, then you should entertain probability 1⁄2 for Heads.
If the optimal betting scheme requires you to rely on P(Heads/Red or Blue)=1/2 when receiving evidence Blue, then the betting scheme demands that you ignore your total evidence. Ignoring total evidence does not necessarily invalidate the probability model, but it certainly needs justification. Otherwise, by strictly following your total evidence your model will also make you run afoul of the Reflection Principle, since you will arrive at probability 1⁄3 in every single experimental run.
Going one step back: with my translation of the conditional probabilities above I have made the implicit assumption that the way the agent learns evidence is not biased towards a certain hypothesis. But this is obviously not true for Beauty:
Due to the memory loss, Beauty is unable to learn the evidence „Red and Blue“ regardless of the coin toss. Combined with her sleeping through Tuesday if Heads, this means she is going to learn „Red“ and „Blue“ (but never „Red and Blue“ together) if Tails, while she is only going to learn either „Red“ or „Blue“ if Heads, resulting in a bias towards the Tails hypothesis.
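This asymmetry is easy to exhibit numerically (my own sketch, again assuming Monday and Tuesday are painted different colors assigned at random): tallying which color observations occur over a whole run shows that Tails runs always contain both „Red“ and „Blue“, while Heads runs contain exactly one of them.

```python
import random
from collections import Counter

counts = Counter()
for _ in range(100_000):
    heads = random.random() < 0.5
    monday_red = random.random() < 0.5           # Monday and Tuesday get opposite colors
    learned = {"Red" if monday_red else "Blue"}   # Monday's color is always observed
    if not heads:                                 # Tails: the Tuesday awakening adds the other color
        learned.add("Blue" if monday_red else "Red")
    counts[("Heads" if heads else "Tails", frozenset(learned))] += 1

for (outcome, learned), n in counts.items():
    print(outcome, sorted(learned), n)
# Tails runs always yield both "Red" and "Blue"; Heads runs yield exactly one of them.
```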
I admit that the combination P(Heads/Red)=P(Heads/Blue)=1/3 but P(Heads/Red or Blue)=1/2 hints at the existence of that information selection bias. However, this is just as little a feature of your model as a flat tire is a feature of your car just because it prompts you to fix it. It is not your probability model that guides you to adopt the proper betting strategy by ignoring total evidence; in fact, it is the other way around: your knowledge about the bias guides you to partially dismiss your model.
As mentioned above, this does not necessarily invalidate your model, but it shows that directly applying it in certain decision scenarios does not guarantee optimal decisions and can even lead to bad decisions and to violating the Reflection Principle.
Therefore, as a halfer, I would prefer an updating rule that takes the bias into account and tells me P(Heads/Red)=P(Heads/Blue)=P(Heads/Red or Blue)=1/2, while offering me the possibility of a workaround to arrive at your betting scheme.
One possible workaround is for Beauty to run a simulation of another experiment within her original Technicolor experiment, one in which she is only awoken in a Red room. She can easily simulate that, and the same updating rule that tells her P(Heads/Red)=1/2 for the original experiment tells her P(Heads/Red)=1/3 for the simulated one.
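As a sanity check of the quoted 1/3 figure, here is my own sketch of that simulated sub-experiment, in which an awakening happens only on days the room is Red: the fraction of Heads among runs that contain an awakening comes out at about 1/3.

```python
import random

awakened = 0
awakened_and_heads = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    monday_red = random.random() < 0.5      # Monday and Tuesday get opposite colors
    # Modified protocol: Beauty is awoken only on days the room is Red.
    # Under Heads she is only up on Monday; under Tails one of the two days is Red.
    has_red_awakening = monday_red if heads else True
    if has_red_awakening:
        awakened += 1
        awakened_and_heads += heads
print(awakened_and_heads / awakened)        # ~1/3
```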
„This leads to the conclusion that observing the event “Red”, as opposed to “Red or Blue”, is possible only for someone who has been expecting to observe the event “Red” in particular. Likewise, observing HTHHTTHT is possible only for a person who was expecting this particular sequence of coin tosses, rather than any combination of length 8. See Another Non-Anthropic Paradox: The Unsurprising Rareness of Rare Events.“
I have already refuted this way of reasoning in the comments of your post.
„Sure, I don't deny that. What I am saying is that your probability model doesn't tell you which probability you have to base a certain decision on.“
It says which probability you have, based on what you’ve observed. If you observed that it’s Monday, you are supposed to use the probability conditional on the fact that it’s Monday; if you didn’t observe that it’s Monday, you can’t lawfully use the probability conditional on the fact that it’s Monday. Simple as that.
There is a possible confusion where people may think that they have observed “this specific thing happened” while actually they observed “anything from some group of things happened”, which is what the Technicolor and rare-event cases are about.
„Suppose a simple experiment in which the experimenter flips a fair coin and you have to guess Heads or Tails, but you are only rewarded for a correct guess if the coin comes up Tails. Then, of course, you should still entertain the unconditional probabilities P(Heads)=P(Tails)=1/2. But this uncertainty is completely irrelevant to your decision.“
Here you are confusing probability and utility. The fact that P(Heads)=P(Tails)=1/2 is very much relevant to our decision making! The correct reasoning goes like this:
P(Heads) = 1⁄2
P(Tails) = 1⁄2
U(Heads) = 0
U(Tails) = X
E(Tails) = P(Tails)U(Tails) − P(Heads)U(Heads) = 1/2·X − 0
Solving E(Tails) = 0 for X:
X = 0
Which means that you shouldn’t bet on Heads at any odds.
„What is relevant, however, is P(Tails/Tails)=1 and P(Heads/Tails)=0, from which you conclude that you should follow the strategy of always guessing Tails.“
And why did you happen to decide that it’s P(Tails|Tails) = 1 and P(Heads|Tails) = 0, rather than P(Heads|Heads) = 1 and P(Tails|Heads) = 0, that are “relevant” for your decision making?
You seem to just decide the “relevance” of probabilities post hoc, after you’ve already calculated the correct answer the proper way. I don’t think you can formalize this line of thinking in a way that would let you systematically solve decision-theory problems to which you do not yet know the answer. Otherwise, we wouldn’t need utilities as a concept.
„Another way to arrive at this strategy is to calculate expected utilities with U(Heads)=0, as you would propose. But this is not the only reasonable solution; it is just a different route of reasoning for taking into account the experimental condition that your decision counts only if the coin lands Tails.“
This is not “another way”. This is the right way. It has the proper formalization and actually allows us to arrive at the correct answer even if we do not yet know it.
„If the optimal betting scheme requires you to rely on P(Heads/Red or Blue)=1/2 when receiving evidence Blue, then the betting scheme demands that you ignore your total evidence.“
You do not “ignore your total evidence”; you are never supposed to do that. It’s just that you didn’t actually receive that evidence in the first place. You can observe the fact that the room is blue in this experiment only if you have put your mind in a state where you distinguish blue in particular. Until then your event space doesn’t even include “Blue”, only “Blue or Red”.
But I suppose it’s better to go to the comment section of Another Non-Anthropic Paradox for this particular crux.
„And why did you happen to decide that it’s P(Tails|Tails) = 1 and P(Heads|Tails) = 0, rather than P(Heads|Heads) = 1 and P(Tails|Heads) = 0, that are “relevant” for your decision making? You seem to just decide the “relevance” of probabilities post hoc, after you’ve already calculated the correct answer the proper way. I don’t think you can formalize this line of thinking in a way that would let you systematically solve decision-theory problems to which you do not yet know the answer. Otherwise, we wouldn’t need utilities as a concept.“
No, it's not post hoc. The simple rule to follow is: if a certain value x of a random variable X is relevant to your decision, then base your decision on the probability of x conditional on all conditions that are known to be satisfied whenever your decision is actually linked to the consequences of interest. And that is P(x/Tails), not P(x/Heads), in the case where guessing X correctly is only rewarded if X=Tails.
Of course, the rule can't guarantee correct answers, since the correctness of your decision depends not only on the proper application of the rule but also on the quality of your probability model. However, notice that this feature could be used to test a probability model. For example, David Lewis's model of the original Sleeping Beauty experiment says P(Heads/Monday)=2/3, which results in bad betting decisions if the bet only counts on Monday and the rule is applied. Thus, there must be something wrong either with the rule or with the model. Since the logic of the rule seems valid to me, this leads me to dismiss Lewis's model.
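A quick numerical check of that Lewis example (my own sketch; the 1-unit payout and the 0.60 ticket price are illustrative assumptions): Monday occurs exactly once in every experiment and the coin is Heads in half of them, so an agent who prices a Monday-only bet on Heads at 2/3 systematically overpays.

```python
import random

runs = 100_000
heads_on_monday = sum(random.random() < 0.5 for _ in range(runs))
print(heads_on_monday / runs)      # ~0.5: actual frequency of Heads among Monday awakenings

# An agent using Lewis's P(Heads/Monday) = 2/3 would happily pay, say, 0.60 for a ticket
# that pays 1 if Heads (expected value 2/3 - 0.60 > 0 by her lights),
# but the realized average value of that ticket is ~0.5, a loss of ~0.10 per experiment.
price = 0.60
avg_profit = heads_on_monday / runs - price
print(avg_profit)                  # ~ -0.10
```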
„You do not “ignore your total evidence”; you are never supposed to do that. It’s just that you didn’t actually receive that evidence in the first place. You can observe the fact that the room is blue in this experiment only if you have put your mind in a state where you distinguish blue in particular. Until then your event space doesn’t even include “Blue”, only “Blue or Red”. But I suppose it’s better to go to the comment section of Another Non-Anthropic Paradox for this particular crux.“
I've read your latest reply on this topic and I generally agree with it. As I already wrote, it is absolutely possible to create an event space that models a state of mind that is biased towards perceiving certain events (e.g. red) while neglecting others (e.g. blue). However, I find it difficult to understand how adopting such an event space, one that excludes an event that is relevant evidence according to your model, is not ignoring total evidence. This seems to me as if you were arguing that you don't ignore something because you are biased to ignore it. Or are you just saying that I was referring to the wrong mental concept, since we can only ignore what we actually do observe? Well, from my psychologist's point of view, I highly doubt that simply precommitting to red is a sufficient condition to reliably prevent the human brain from classifying the perception of blue as the event „blue room“ rather than merely as “a colored room (red or blue)“. I guess most people would still subjectively experience themselves in a blue room.
Apart from that, is the concept of total evidence really limited to evidence that is actually observed, or does it rather refer to all evidence accessible to the agent, including evidence obtained through further investigation, reflection, reasoning and inference beyond direct observation? Even if the evidence „blue room“ was not initially observed by the agent due to some strong, biased mindset, the evidence would still be accessible to him and could therefore be considered part of his total evidence as long as the agent is able to break the mindset.
In the end, the experiment could be modified such that Beauty's memory of her Sunday precommitment is erased while she sleeps and is brought back to her mind by the experimenter only after she has awoken and seen the room. In this case, she has already observed a particular color before her Sunday mindset, which could have prevented this, is „reactivated“.