I don’t think so. Even in the heads case, it could still be Monday—and say the experimenter told her: “Regardless of the ultimate sequence of events, if you predict correctly when you are woken up, a million dollars will go to your children.”
To me “as a rational individual” is simply a way of saying “as an individual who is seeking to maximize the accuracy of the probability value she proposes—whenever she is in a position to make such a proposal (which implies, among other things, that she must be alive to make the proposal).”
This is why it’s important to specify what future impact the prediction has. “Accuracy” has no meaning outside of resolution—what happens in the future.
Adding the result “$1M to your kids if you predict correctly” makes 1⁄2 the obvious and only choice. “Feel good about yourself if you survive” makes 0% the correct choice. Other outcomes can make 1⁄3 the right choice.
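(Not part of the original exchange, but the per-experiment vs. per-awakening readings of “predict correctly” can be made concrete with a quick simulation; the setup below is my own illustrative sketch of the standard Sleeping Beauty protocol, where heads means one awakening and tails means two.)

```python
import random

def simulate(trials=200_000):
    """Estimate P(heads) under two resolution rules:
    counting once per experiment vs. once per awakening."""
    heads_experiments = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        if heads:
            heads_experiments += 1
            heads_awakenings += 1   # heads: one awakening (Monday)
            total_awakenings += 1
        else:
            total_awakenings += 2   # tails: two awakenings (Mon + Tue)
    per_experiment = heads_experiments / trials          # tends to 1/2
    per_awakening = heads_awakenings / total_awakenings  # tends to 1/3
    return per_experiment, per_awakening

print(simulate())
```

If the $1M resolves once per experiment, the first frequency is what matters and 1⁄2 is the right answer; if a reward were paid at every awakening, the second frequency would push the answer toward 1⁄3.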
I do not agree that accuracy has no meaning outside of resolution. At least this is not the sense in which I was employing the word. By accurate I simply mean numerically correct within the context of conventional probability theory. Like if I ask the question “A die is rolled—what is the probability that the result will be either three or four?” the accurate answer is 1⁄3. If I ask “A fair coin is tossed three times, what is the probability that it lands heads each time?” the accurate answer is 1⁄8, etc. This makes the accuracy of a probability value proposal wholly independent of pay-offs.
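(An aside, not in the original comment: the two textbook answers above can be checked mechanically with exact arithmetic, e.g. in Python with the standard-library `fractions` module.)

```python
from fractions import Fraction

# P(die shows 3 or 4): two favorable faces out of six equally likely ones.
p_die = Fraction(2, 6)               # reduces to 1/3

# P(three fair flips all land heads): independent events multiply.
p_three_heads = Fraction(1, 2) ** 3  # 1/8

print(p_die, p_three_heads)
```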
Cool, I think we’ve found our fundamental disagreement. I do not agree that “numerically correct within the context of conventional probability theory” is meaningful. That’s guessing the teacher’s password, rather than understanding how probability theory models reality.
In objective outside truth (if there is such a thing), all probabilities are 1 or 0 - a thing happens or it doesn’t. Individual assessments of probability are subjective, and are about knowledge of the thing, not the thing itself. Probabilities used in this way are predictions of future evidence.
If you don’t specify what evidence you’re predicting, it’s easy to get confused about what calculation you should use to calculate your probability.
Oh that’s an interesting way to approach things! If you were asked: a fair coin is tossed, what is the probability it will land on heads—wouldn’t you reply 1⁄2, and wouldn’t your reply be relying on such a thing as conventional probability theory?
Yes for the first half, no for the second. I would reply 1⁄2, but not JUST because of conventional probability theory. It’s also because the unstated parts of “what will resolve the prediction”, in my estimation and modeling, match the setup of conventional probability theory. It’s generally assumed there’s no double-counting or other experience-affecting tomfoolery.
All right—but here the evidence predicted would simply be “the coin landed on heads”, no? I don’t really see the contradiction between what you’re saying and conventional probability theory (more or less all of which was developed with the specific aim of making predictions, winning games, etc.). Yes, I agree that saying “the coin landed on heads with probability 1/3” is a somewhat strange way of putting things (the coin either did or did not land on heads), but it’s shorthand for a conceptual framework that has fairly simple and sound foundations.