It seems really odd to do the latter, and I think more motivation is needed for it.
This old post of mine may help. The short version is that if you do probability with “centered propositions” then the resulting probabilities can’t be used in expected utility maximization.
(To be fair, I don’t have a better alternative in mind.)
I think the logical next step from Neal’s concept of “full non-indexical conditioning” (where updating on one’s experiences means taking all possible worlds, assigning 0 probability to those not containing “a version of me which has received this data as well as all of the prior data I have received”, and then renormalizing the sum of the rest to 1) is to not update at all; in other words, to use UDT. The motivation here is that from a decision-making perspective, the assigning-0/renormalizing step either does nothing (if your decision has no consequences in the worlds that you’d assign 0 probability to) or is actively bad (if your decision does have consequences in those possible worlds, due to logical correlation between you and something/someone in one of those worlds). (UDT also has a bunch of other motivations if this one seems insufficient by itself.)
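A minimal sketch of the FNC update described above, as I picture it (a toy representation of worlds of my own devising, not Neal’s formalism):

```python
# Full non-indexical conditioning over a finite toy set of possible worlds.
# Each world carries a prior probability and the observation-histories of the
# observers it contains; the update keeps exactly the worlds in which some
# observer has received my data, then renormalizes.

def fnc_update(worlds, my_history):
    """worlds: list of (prior_prob, list_of_observer_histories);
    my_history: tuple of everything I have observed so far."""
    surviving = [(p, observers) for p, observers in worlds if my_history in observers]
    total = sum(p for p, _ in surviving)
    return [(p / total, observers) for p, observers in surviving]

# Toy usage: two equally likely worlds; only the first contains an observer
# whose history matches mine.
worlds = [(0.5, [("red",), ("blue",)]), (0.5, [("blue",)])]
print(fnc_update(worlds, ("red",)))   # -> [(1.0, [('red',), ('blue',)])]

# The UDT move suggested above is simply to skip this step and act on the
# original prior, since zeroing-out-and-renormalizing either changes nothing
# or discards worlds your decision still influences.
```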
Yeah, but the OP was motivated by an intuition that probability theory is logically prior to and independent of decision theory. I don’t really have an opinion on whether that is right or not, but I was trying to answer the post on its own terms. The lack of a good purely-probability-theory analysis might be a point in favor of taking a measure non-realist point of view, though.
To make clear the difference between your view and ksvanhorn’s, I should point out that in his view if Sleeping Beauty is an AI that’s just woken up on Monday/Tuesday but not yet received any sensory input, then the probabilities are still 1⁄2; it is only after receiving some sensory input which is in fact different on the two days (even if it doesn’t allow the AI to determine what day it is) that the probabilities become 1⁄3. Whereas for decision-theoretic purposes you want the probability to be 1⁄3 as soon as the AI wakes up on Monday/Tuesday.
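If I’m reading that FNC-style analysis correctly, the arithmetic behind the shift is roughly the following; the ε here is my own placeholder for the probability of receiving this exact sensory stream on a given awakening:

```python
# Posterior on Heads after waking and receiving some particular sensory data.
eps = 1e-6                        # chance of this exact data on a given awakening

p_heads = p_tails = 0.5
p_data_given_heads = eps                      # Heads: one awakening
p_data_given_tails = 1 - (1 - eps) ** 2       # Tails: two awakenings, ~2*eps

posterior_heads = (p_heads * p_data_given_heads) / (
    p_heads * p_data_given_heads + p_tails * p_data_given_tails)
print(posterior_heads)   # ~1/3 for small eps; exactly 1/2 when eps = 1
                         # (i.e. when the data carries no information at all)
```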
for decision-theoretic purposes you want the probability to be 1⁄3 as soon as the AI wakes up on Monday/Tuesday.
That is based on a flawed decision analysis that fails to account for the fact that Beauty will make the same choice, with the same outcome, on both Monday and Tuesday (it treats the outcomes on those two days as independent).
So you want to use FDT, not CDT. But if the additional data of which direction the fly is going isn’t used in the decision-theoretic computation, then Beauty will make the same choice on both days regardless of whether she has seen the fly’s direction or not. So according to this analysis the probability still needs to be 1⁄2 after she has seen the fly.
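To make the objection concrete, here is the sort of toy betting calculation I have in mind (the payoffs are my own arbitrary choices, not anything from the original discussion):

```python
# An offer made at each awakening: +1 if the coin landed Heads, -0.6 if Tails.
win, loss = 1.0, 0.6

# "Independent outcomes" analysis: score each awakening as a separate bet,
# using credence 1/3 in Heads.
ev_per_awakening_thirder = (1/3) * win - (2/3) * loss

# Analysis that respects Beauty making the same choice on both days: per coin
# flip, accepting wins once under Heads but loses twice under Tails.
ev_per_flip_halfer = 0.5 * win - 0.5 * (2 * loss)

print(ev_per_awakening_thirder, ev_per_flip_halfer)
# Both are negative here, and in general both say "accept" exactly when
# win > 2 * loss, so the betting behaviour alone doesn't force the
# probability to be 1/3 rather than 1/2.
```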