This is an interesting idea, and one that appeals to me owing to my earlier angle of attack on intuitions about “subjective anticipation”.
The question then becomes: how would we program a robot to answer the kind of question that was asked of Sleeping Beauty?
This comment suggests one concrete way of operationalizing the term “credence”. It could be a wrong way, but at least it is a concrete suggestion, something I think is lacking in other parts of this discussion. What is our criterion for judging either answer a “wrong” answer? More specifically still, how do we distinguish between a robot correctly programmed to answer this kind of question, and one that is buggy?
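To make the question concrete, here is a minimal sketch of the experiment itself, purely as illustration on my part (the function name and trial count are my own choices, not anything specified in the problem). It simulates the protocol — one awakening on heads, two on tails — and simply counts the two ways one might tally the heads frequency:

```python
import random

def run_experiments(n_trials: int = 100_000) -> None:
    """Simulate the Sleeping Beauty protocol and count heads two ways."""
    heads_experiments = 0
    heads_awakenings = 0
    total_awakenings = 0

    for _ in range(n_trials):
        heads = random.random() < 0.5
        # Heads: one awakening (Monday only); tails: two awakenings (Monday and Tuesday).
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        if heads:
            heads_experiments += 1
            heads_awakenings += 1

    print("fraction of experiments with heads:", heads_experiments / n_trials)        # ~0.5
    print("fraction of awakenings under heads:", heads_awakenings / total_awakenings)  # ~0.333

run_experiments()
```

Both counts are uncontroversial; the dispute is over which count the robot's reported credence is supposed to track, which is exactly the question of what would make its programming correct or buggy.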
As in the robot-and-copying example, I suspect that which of 1⁄2 or 1⁄3 is the “correct” answer in fact depends on what (heretofore implicit) goals, epistemic or instrumental, we decide to program the robot to have.
And I think this is roughly equivalent to the suggestion that the payoff matters.
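Here is a hedged sketch of that point (the scoring rule and the two "regimes" are my own illustration, not part of the original problem): the robot reports a fixed credence p in heads at every awakening and is charged a quadratic (Brier) penalty. The only difference between the regimes is whether the penalty is charged once per awakening or once per experiment.

```python
import random

def expected_penalty(p: float, per_awakening: bool, n_trials: int = 200_000) -> float:
    """Average Brier penalty for reporting credence p in heads, under one payoff regime."""
    total = 0.0
    for _ in range(n_trials):
        heads = random.random() < 0.5
        outcome = 1.0 if heads else 0.0
        awakenings = 1 if heads else 2
        # Charge the penalty at every awakening, or just once per experiment.
        charges = awakenings if per_awakening else 1
        total += charges * (p - outcome) ** 2
    return total / n_trials

for p in (0.5, 1 / 3):
    print(f"p={p:.3f}  per-awakening penalty: {expected_penalty(p, True):.4f}  "
          f"per-experiment penalty: {expected_penalty(p, False):.4f}")
```

Under per-awakening scoring the penalty is minimized near p = 1⁄3, and under per-experiment scoring near p = 1⁄2. So which robot counts as "correctly programmed" only becomes well-defined once we have fixed the payoff structure we want it to optimize.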