Yeah, but the OP was motivated by an intuition that probability theory is logically prior to, and independent of, decision theory. I don’t really have an opinion on whether that is right, but I was trying to answer the post on its own terms. The lack of a good purely probability-theoretic analysis might be a point in favor of taking a measure non-realist view, though.
To make clear the difference between your view and ksvanhorn’s: on his view, if Sleeping Beauty is an AI that has just woken up on Monday/Tuesday but has not yet received any sensory input, the probabilities are still 1⁄2; only after it receives some sensory input that is in fact different on the two days (even if that input doesn’t allow the AI to determine which day it is) do the probabilities become 1⁄3. Whereas for decision-theoretic purposes you want the probability to be 1⁄3 as soon as the AI wakes up on Monday/Tuesday.
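For concreteness, here is a minimal Monte Carlo sketch (my own illustration, not anything from ksvanhorn’s post) of why both numbers are defensible: counting per experiment gives P(heads) ≈ 1⁄2, while counting per awakening gives P(heads) ≈ 1⁄3.

```python
import random

def simulate(n_trials=100_000, seed=0):
    """Contrast the per-experiment and per-awakening probabilities of heads.

    Heads: Beauty is woken once (Monday).
    Tails: Beauty is woken twice (Monday and Tuesday).
    """
    rng = random.Random(seed)
    heads_experiments = 0   # experiments in which the coin landed heads
    heads_awakenings = 0    # awakenings at which the coin landed heads
    total_awakenings = 0
    for _ in range(n_trials):
        heads = rng.random() < 0.5
        if heads:
            heads_experiments += 1
            heads_awakenings += 1
            total_awakenings += 1
        else:
            total_awakenings += 2
    print("P(heads), per experiment:", heads_experiments / n_trials)        # ~ 1/2
    print("P(heads), per awakening: ", heads_awakenings / total_awakenings) # ~ 1/3

simulate()
```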
> for decision-theoretic purposes you want the probability to be 1⁄3 as soon as the AI wakes up on Monday/Tuesday.
That is based on a flawed decision analysis that fails to account for the fact that Beauty will make the same choice, with the same outcome, on both Monday and Tuesday (it treats the outcomes on those two days as independent).
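To make that concrete, here is a small sketch (the $1-on-heads bet at price c is my own illustration) of how the two accountings come apart: treating the two awakenings as independent bets puts the break-even price at 1⁄3, while recognizing that the Monday and Tuesday choices are one choice with one shared outcome puts it at 1⁄2.

```python
def expected_value(c):
    """Expected payoffs of the fixed policy 'accept a bet that pays $1
    if heads, at price c, whenever offered', under two accountings."""
    # Treating the two awakenings as independent bets: under tails the
    # price is paid twice.  EV = 0.5*(1 - c) + 0.5*(-2c); zero at c = 1/3.
    independent = 0.5 * (1 - c) + 0.5 * (-2 * c)
    # Recognizing that Monday's and Tuesday's choices are the same choice
    # with the same outcome: the bet counts once per experiment.
    # EV = 0.5*(1 - c) + 0.5*(-c); zero at c = 1/2.
    same_choice = 0.5 * (1 - c) + 0.5 * (-c)
    return independent, same_choice

print(expected_value(1 / 3))  # (0.0, ...): 'independent' accounting breaks even
print(expected_value(1 / 2))  # (..., 0.0): 'same choice' accounting breaks even
```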
So you want to use FDT, not CDT. But if the additional datum of which direction the fly is going isn’t used in the decision-theoretic computation, then Beauty will make the same choice on both days regardless of whether she has seen the fly’s direction. So, according to this analysis, the probability still needs to be 1⁄2 after she has seen the fly.
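A toy sketch of that last step (the beauty_policy name and the "decline" placeholder are mine): if the policy never reads the fly observation, its output is identical across observations, so the expected-payoff computation, and the probability it implies, cannot change when the fly is seen.

```python
from typing import Optional

def beauty_policy(fly_direction: Optional[str]) -> str:
    """Beauty's decision as a function of her observations.

    The fly's direction is available as an input, but the policy
    never reads it, so it cannot affect the output.
    """
    return "decline"  # placeholder for whatever the optimal fixed choice is

# The choice is identical across all possible observations, so the
# FDT-style expected-payoff computation, and the break-even price it
# implies, is the same before and after seeing the fly.
assert beauty_policy(None) == beauty_policy("left") == beauty_policy("right")
```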