Two ways to iterate the experiment:
1. Replicate the entire experiment 1000 times. That is, there will be 1000 independent tosses of the coin. This will lead to between 1000 and 2000 awakenings, with an expected value of 1500 awakenings.
and
2. Replicate her awakening-state 1000 times. Because her epistemic state is always the same on an awakening, from her perspective it could be Monday or Tuesday, and it could be heads or tails.
The distinction between 1 and 2 is that, in 2, we are trying to repeatedly sample from the joint probability distributions that she should have on an awakening. In 1, we are replicating the entire experiment, with the double counting on tails.
This seems a distinction without a difference. The longer the iterated SB process continues, the less important the distinction between counting tosses and counting awakenings becomes. The distinction is only about a stopping criterion, not about the convergence of the observed frequencies of tosses and awakenings to their expected values while the process is running. Considered as an ongoing process of indefinite duration, the expected numbers of tosses and of observations of each type are well-defined, easily computed, and well-behaved with respect to each other. Over the long run, #awakenings accumulates 1.5 times as fast as #tosses. Beauty is never more than two awakenings away from starting a new coin toss, so whether you stop as soon as an awakening has completed or wait until you finish a coin-toss cycle, the relative perturbation in the statistics collected so far goes to zero. Briefly, there is no “natural” unit of replication independent of observer interest.
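To make this concrete, here is a minimal simulation sketch in Python (the function and variable names are mine, purely illustrative): it runs the iterated experiment under both stopping rules and tallies the awakenings-per-toss ratio and the per-awakening frequency of Heads.

```python
import random

def simulate(n_tosses=100_000, stop_on_awakening=False, seed=0):
    """Iterate Sleeping Beauty; count tosses, awakenings, and Heads-awakenings.

    stop_on_awakening=True cuts the final Tails cycle short after its Monday
    awakening, i.e. stops on an awakening boundary instead of a toss boundary.
    """
    rng = random.Random(seed)
    tosses = awakenings = heads_awakenings = 0
    for i in range(n_tosses):
        tosses += 1
        heads = rng.random() < 0.5
        awakenings += 1                      # Monday awakening happens either way
        if heads:
            heads_awakenings += 1
        elif not (stop_on_awakening and i == n_tosses - 1):
            awakenings += 1                  # Tails adds a Tuesday awakening
    return tosses, awakenings, heads_awakenings

for stop_on_awakening in (False, True):
    t, a, h = simulate(stop_on_awakening=stop_on_awakening)
    print(f"stop_on_awakening={stop_on_awakening}: "
          f"awakenings/tosses={a / t:.3f}, heads per awakening={h / a:.3f}")
# Both stopping rules give awakenings/tosses ~ 1.5 and Heads-per-awakening ~ 1/3;
# the choice of stopping rule perturbs the counts by at most one awakening.
```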
She knows that it was a fair coin. She knows that if she’s awake it’s definitely Monday if heads, and could be either Monday or Tuesday if tails. She knows that 50% of coin tosses would end up heads, so we assign 0.5 to Monday&heads.
This would be an error. You are assigning a 50% probability to an observation (that it is Heads&Monday) without taking into account the bias that’s built into the process by which Beauty makes observations. Alternatively, if you are uncertain whether Monday is true or not—you know it might be Tuesday—then you should be uncertain that P(Heads)=P(Heads&Monday).
You, the outside observer, know the chance of observing that the coin lands Heads is 50%. You presumably know this because you have corroborated it through an unbiased observation process: look at the coin exactly once per toss. Once Beauty is put to sleep and awoken, she is no longer an outside observer; she is a participant in a biased observation process, so she should update her expectation about what her observation process will show.
Different observation process, different observations, different likelihoods of what she can expect to see.
Of course, as a card-carrying thirder, I’m assuming that the question about credence is about what Beauty is likely to see upon awakening. That’s what the carefully constructed wording of the question suggests to me.
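To spell out the per-awakening arithmetic this relies on (given the stipulated fair coin): each toss yields, in expectation, half a Heads&Monday awakening, half a Tails&Monday awakening, and half a Tails&Tuesday awakening, so

$$
P(\text{Heads} \mid \text{awakened}) \;=\; \frac{1/2}{1/2 + 1/2 + 1/2} \;=\; \frac{1}{3},
\qquad
P(\text{Heads}\,\&\,\text{Monday} \mid \text{awakened}) \;=\; \frac{1}{3} \neq \frac{1}{2}.
$$

That is the sense in which assigning 0.5 to Heads&Monday ignores the bias in Beauty’s observation process.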
She knows that 50% of coin tosses would end up tails,
except that as we agreed, she’s not observing coin tosses, she’s observing biased samples of coin tosses. The connection between what she observes and the objective behavior of the coin is just what’s at issue here, so you can’t beg the question.
In 1, people are using these ratios of expected counts to get the 1⁄3 answer. 1⁄3 is the correct answer to the question about the long-run frequencies of awakenings preceded by heads to awakenings preceded by tails. But I do not think it is the answer to the question about her credence of heads on an awakening.
Agreed, but for this: it all depends on what you want credence to mean, and what it’s good for; see discussion below.
In 2, the joint probabilities are determined ahead of time based on what we know about the experiment.
Let n2 and n3 be the counts, in repeated trials, of tails&Monday and tails&Tuesday, respectively. You will of course see that n2=n3. They are the same random variable. tails&Monday and tails&Tuesday are the same.
Let me uphold a distinction that’s continually skated over, but which is a crucial point of disagreement here. I think you’re confusing your evidence with the thing evidenced. And you are selectively filtering your evidence, which amounts to throwing away information. Tails&Monday and Tails&Tuesday are not the same; they are distinct observations of the same state of the coin, and thus they are perfectly correlated in that regard. Aside from the coin, they observe distinct days of the week, and thus different states of affairs. By a state of affairs I mean the conjunction of all the observable properties of interest at the moment of observation.
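A small sketch of what I mean, again in Python (the record layout is just for illustration): n2 and n3 are indeed equal in every run, but they count distinct entries in the observation log, distinguished by the day field.

```python
import random

rng = random.Random(1)
log = []                                   # one record per awakening: (trial, day, coin)
for trial in range(10_000):
    coin = "heads" if rng.random() < 0.5 else "tails"
    log.append((trial, "Monday", coin))    # a Monday awakening happens either way
    if coin == "tails":
        log.append((trial, "Tuesday", coin))   # Tails adds a Tuesday awakening

n2 = sum(1 for _, day, coin in log if coin == "tails" and day == "Monday")
n3 = sum(1 for _, day, coin in log if coin == "tails" and day == "Tuesday")
print(n2 == n3)      # True in every run: the counts are perfectly correlated...
print(n2 + n3)       # ...yet the log holds n2 + n3 distinct Tails observations,
                     # differing in the observable day-of-week property.
```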
It’s like what Jack said about types and tokens. It’s like Vladimir_Nesov said:
The distinction between types and tokens is only relevant when you want to interpret your tokens as being about something else, their types, rather than about themselves. But types are carved out of observers’ interests in their significance, which are non-objective, observer-dependent if anything is. Their variety and fineness of distinction is potentially infinite. As I mentioned above, a state of affairs is a conjunction of observable properties of interest. This Boolean lattice has exactly one top, Everything, and unknown atoms, if any, at the bottom. Where you choose to carve out a distinction between type and token is a matter of observer interest.
Two subsequent states of a given dynamical system make for poor distinct elements of a sample space: when we’ve observed that the first moment of a given dynamical trajectory is not the second, what are we going to do when we encounter the second one? It’s already ruled “impossible”! Thus, Monday and Tuesday under the same circumstances shouldn’t be modeled as two different elements of a sample space.
I’ll certainly agree it isn’t desirable, but oughtn’t isn’t the same as isn’t, and in the Sleeping Beauty problem we have no choice. Monday and Tuesday just are different elements in a sample space, by construction.
if she starts out believing that heads has probability 1⁄2, but learns something about the coin toss, her probability might go up a little if heads and down a little if tails.
What you seem to be talking about is using the evidence that observations provide to corroborate or update Beauty’s belief that the coin is in fact fair. Is that a reasonable take? If so, note that due to the epistemic reset between awakenings, there is never any usable input to this updating procedure. I’ve already stipulated that this is impossible; this is precisely what the epistemic reset assumption is for. I thought we were getting off this merry-go-round.
Suppose, for example, she is informed of a variable X. If P(heads|X)=P(tails|X), then why is she updating at all? Meaning, why is P(heads)=/=P(heads|X)? This would be unusual. It seems to me that the only reason she changes is because she knows she’d be essentially ‘betting’ twice if tails, but that really is distinct from credence for tails.
Ok, I guess it depends on what you want the word “credence” to mean, and what you’re going to use it for. If you’re only interested in some updating process that digests incoming information-theoretic quanta, like you would get if you were trying to corroborate that the coin was indeed a fair one to within a certain standard error, you don’t have it here. That’s not Sleeping Beauty, that’s her faithful but silent, non-memory-impaired lab partner with the log book. If Beauty herself is to have any meaningful notion of credence in Heads, it’s pointless for it to be about whether the coin is indeed fair. That’s a separate question, which in this context is a boring thing to ask her about, because it’s trivially obvious: she’s already accepted the information going in that it is fair and she will never get new information from anywhere regarding that belief. And, while she’s undergoing the process of being awoken inside the experimental setup, a value of credence that’s not connected to her observations is not useful for any purpose that I can see, other than perhaps to maintain her membership in good standing in the Guild of Rational Bayesian Epistemologists. It doesn’t connect to her experience, it doesn’t predict frequencies of anything she has any access to, it’s gone completely metaphysical. Ok, what else is there to talk about? On my view, the only thing left is Sleeping Beauty’s phenomenology when awakened. On Bishop Berkeley’s view, that’s all you ever have.
Beauty gets usable, useful information (I guess it depends on what you want “information” to mean, too) once, on Sunday evening, and she never forgets it thereafter. This information is separate from, and in addition to, the information that the coin itself is fair. This other information allows her to make a more accurate prediction about the likelihood that, each time she is awoken, the coin is showing heads, or that it is Monday rather than Tuesday. The information she receives is the details of the sampling process, which has been specifically constructed to give results that are biased with respect to the coin toss itself and with respect to the day of the week. Directly after being informed of the structure of the sampling process, she knows it is biased and therefore ought to update her prediction about the relative frequencies, per observation, of each observable aspect of the state of affairs she’s awoken into—Heads vs. Tails, Monday vs. Tuesday.
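Spelled out, the per-observation relative frequencies she should predict after the Sunday briefing (on my thirder reading of the question) are

$$
P(\text{Heads}) = \tfrac{1}{3}, \quad P(\text{Tails}) = \tfrac{2}{3}, \quad
P(\text{Monday}) = \tfrac{2}{3}, \quad P(\text{Tuesday}) = \tfrac{1}{3}
\qquad \text{(per awakening)},
$$

in particular, Heads drops from the outside observer’s once-per-toss 1/2 to 1/3 per awakening.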
I think I might understand the interpretation that a halfer puts on the question. I’m just doubtful of its interest or relevance. Do you see any validity (I mean logical coherence, as opposed to wrong-headedness) to this interpretation? Is this just a turf war over who gets to define a coveted word for their purposes?