One of the major takeaways I got from actually reading Jaynes was how careful he always is to write probabilities as conditioned on all prior knowledge: P(A|X), where X is our “background knowledge”.
This is useful in the present case since we can distinguish X, Beauty’s background knowledge about which way a given coin might land, and X’, which represents X plus the description of the experimental setup, including the number of awakenings in each case.
That difference between X and X’ is the new information that Beauty learns, and it is what might make P(heads|X’) different from P(heads|X).
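To make the contrast concrete, here is a minimal simulation sketch (mine, not from the original argument) that tallies heads two ways: once per coin toss, which uses only X, and once per awakening, which uses the awakening schedule that is part of X’. I’m assuming the usual setup of one awakening on heads and two on tails; the point is only that the two tallies come apart, not that either one settles what P(heads|X’) should be.

```python
import random

def simulate(n_trials=100_000, seed=0):
    """Count heads two ways: per coin toss (background knowledge X alone)
    and per awakening (X', which adds the awakening schedule).
    Assumes the usual schedule: one awakening on heads, two on tails."""
    rng = random.Random(seed)
    heads_tosses = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n_trials):
        heads = rng.random() < 0.5       # fair coin: this much is in X
        awakenings = 1 if heads else 2   # assumed schedule: part of X'
        total_awakenings += awakenings
        if heads:
            heads_tosses += 1
            heads_awakenings += awakenings
    print("frequency of heads per toss:     ", heads_tosses / n_trials)
    print("frequency of heads per awakening:", heads_awakenings / total_awakenings)

simulate()
```

Running this prints roughly 0.5 for the per-toss count and roughly 1/3 for the per-awakening count; which of those counting rules corresponds to the probability Beauty should assign on waking is exactly the question at issue.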