A reasonable idea for this and other problems that don’t seem to suffer from ugly asymptotics would simply be to mechanically test them.
That is to say, it may be more efficient, requiring less brain power, to believe the results of repeated simulations. After walking through the Monty Hall tree and the statistics with people who can’t really understand either, only to have them end up believing the results of a simulation whose code is straightforward to read, I advocate this method: empirical verification over intuition or mathematics, both of which are fallible (because you yourself are fallible in your understanding, not because they contain a contradiction).
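For concreteness, here is a minimal sketch (in Python; the function name, trial count, and structure are my own illustration) of the kind of straightforwardly readable Monty Hall simulation I have in mind:

```python
import random

def monty_hall_trial(switch):
    """One round of Monty Hall; returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(monty_hall_trial(switch) for _ in range(trials))
    print(f"switch={switch}: win rate {wins / trials:.3f}")
# Typical output: switch=False gives ~0.333, switch=True gives ~0.667.
```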
This is an interesting idea, one that appeals to me owing to my earlier angle of attack on intuitions about “subjective anticipation”.
The question then becomes: how would we program a robot to answer the kind of question that was asked of Sleeping Beauty?
This comment suggests one concrete way of operationalizing the term “credence”. It could be a wrong way, but at least it is a concrete suggestion, something I think is lacking in other parts of this discussion. What is our criterion for judging either answer a “wrong” answer? More specifically still, how do we distinguish between a robot correctly programmed to answer this kind of question, and one that is buggy?
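As a hedged sketch of why that criterion is slippery (the counting rules and names below are my own illustration, not part of the problem statement): the simulation itself is trivial to write, but it cannot print a number until we decide whether the robot tallies heads per experiment or per awakening, and that choice is exactly where 1⁄2 versus 1⁄3 comes from.

```python
import random

def sleeping_beauty(trials=100_000):
    """Simulate the experiment and tally heads two different ways.

    per experiment: each coin flip contributes one observation.
    per awakening: a tails run contributes two observations, since
    Beauty is woken twice (Monday and Tuesday) on tails.
    """
    heads_experiments = tails_experiments = 0
    heads_awakenings = tails_awakenings = 0
    for _ in range(trials):
        if random.random() < 0.5:  # heads: one awakening
            heads_experiments += 1
            heads_awakenings += 1
        else:                      # tails: two awakenings
            tails_experiments += 1
            tails_awakenings += 2
    print("P(heads) per experiment:",
          heads_experiments / (heads_experiments + tails_experiments))
    print("P(heads) per awakening:",
          heads_awakenings / (heads_awakenings + tails_awakenings))

sleeping_beauty()
# Typical output: ~0.5 per experiment, ~0.333 per awakening.
```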
As in the robot-and-copying example, I suspect that which of 1⁄2 or 1⁄3 is the “correct” answer in fact depends on what (heretofore implicit) goals, epistemic or instrumental, we decide to program the robot to have.
And I think this is roughly equivalent to the suggestion that the payoff matters.
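To illustrate that equivalence (my own sketch, assuming a hypothetical ticket that pays 1 on heads): at a price of 1⁄2 the bet breaks even if it is settled once per experiment, but loses money if a ticket must be bought at every awakening; the per-awakening break-even price works out to 1⁄3.

```python
import random

def bet_value(trials=100_000, price=0.5):
    """Expected profit from buying, at `price`, a ticket paying 1 on heads.

    per experiment: one ticket is bought per run of the experiment.
    per awakening: a ticket is bought at every awakening, so a tails
    run costs twice as much.
    """
    profit_exp = profit_awake = 0.0
    for _ in range(trials):
        heads = random.random() < 0.5
        payout = 1.0 if heads else 0.0
        profit_exp += payout - price
        awakenings = 1 if heads else 2
        profit_awake += awakenings * (payout - price)
    print(f"per experiment: {profit_exp / trials:+.3f} per run")
    print(f"per awakening:  {profit_awake / trials:+.3f} per run")

bet_value()
# At price 0.5: ~0 per experiment, but ~-0.25 per awakening;
# solving 0.5*(1 - p) - 0.5*2p = 0 gives the break-even p = 1/3.
```

So the two numbers are not rival answers to one question; they are correct answers to two different payoff schemes.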
Depending on what you’re testing, and given a decent level of maths ability, empirics doesn’t help you here.