A big reason why probability (and belief in general) is useful is that it separates our observations of the world from our decisions. Rather than somehow relating every observation to every decision we might sometime need to make, we instead relate observations to our beliefs, and then use our beliefs when deciding on actions. That’s the cognitive architecture that evolution has selected for (excepting some more ancient reflexes), and it seems like a good one.
I don’t really disagree, per se, with this general point, but it seems strange to insist on rejecting an answer we already have, and already know is right, in the service of that broad point. If you want to undertake the project of generalizing and formalizing the cognitive algorithms that led us to the right answer, well and good, but in no event should that get in the way of clarity about the original question.
Again: we know the correct answer (i.e. the correct action for Beauty to take), and we know it differs depending on what reward structure is on offer. The question of whether there is, in some sense, a “right answer” even when there are no rewards at all seems to me even potentially useful or interesting only if that “right answer” does in fact generate all the practical correct answers we already have. (And then we can ask whether it’s an improvement on whatever algorithm we had been using to generate those answers, etc.)
Well of course. If we know the right action from other reasoning, then the correct probabilities had better lead us to the same action. That was my point about working backwards from actions to see what the correct probabilities are. One of the nice features of probabilities in “normal” situations is that they do not depend on the reward structure. Instead, we have a decision theory that takes the reward structure and probabilities as input and produces actions. It would be nice if the same property held in SB-type problems, and so far it seems to me that it does.
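Concretely, here is a minimal simulation sketch of that separation (the function names, the trial count, and the particular per-awakening bet are illustrative assumptions of mine, not part of the original problem statement): estimate the reward-independent probability P(Heads | awake) once, then feed it, together with whatever payoff structure is on offer, into ordinary expected-value maximization.

```python
import random

def p_heads_given_awake(n_trials=200_000):
    """Monte Carlo estimate of P(Heads | awake) in the Sleeping Beauty
    setup: a fair coin is flipped; Heads yields one awakening, Tails two.
    Each awakening counts as one observation, so we count per awakening."""
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n_trials):
        heads = random.random() < 0.5
        n_awakenings = 1 if heads else 2
        total_awakenings += n_awakenings
        if heads:
            heads_awakenings += n_awakenings
    return heads_awakenings / total_awakenings

p = p_heads_given_awake()
print(f"P(Heads | awake) ≈ {p:.3f}")  # converges to 1/3

# The probability above is fixed; only the decision varies with the rewards.
# Illustrative per-awakening bet: stake 1 on Tails, winning x if Tails.
# Expected value per awakening is (1 - p) * x - p * 1, so the bet is
# favourable iff x > p / (1 - p), i.e. x > 1/2 -- the standard answer.
x = 0.6
print(f"EV of per-awakening Tails bet at x = {x}: {(1 - p) * x - p:+.3f}")
```

The point of the sketch is that p is computed once, with no reference to any rewards; changing the payoff structure (say, settling the bet once per experiment rather than once per awakening) changes only the expected-value calculation, not the probability.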
I don’t think there has ever been much dispute about the right actions for Beauty to take in the SB problem (i.e., everyone agrees about the right bets for Beauty to make, for whatever payoff structure is defined). So if just getting the right answer for the actions was the goal, SB would never have been considered of much interest.