You’re dodging the question. What if the odds arose from a natural process, so that there isn’t a person on the other side of the bet to compare your state of knowledge against?
Maybe it’s my failure of English comprehension (I’m not a native speaker, as you might guess from my frequent grammatical errors), but when I read the phrase “being offered good odds if offered a bet,” I understood it as asking about a bet with opponents who stand to lose if my guess is right. So, honestly, I wasn’t dodging the question.
But to answer your question, it depends on the concrete case. Some natural processes can be approximated with models that yield useful probability estimates, and faced with some such process, I would of course try to use the best scientific knowledge available to calculate the odds if the stakes are high enough to justify the effort. When this is not possible, however, the only honest answer is that my decision would be guided by whatever intuitive feeling my brain happens to produce after some common-sense consideration, and unless this intuitive feeling told me that losing the bet is extremely unlikely, I would refuse to bet. And I honestly cannot think of a situation where translating this intuitive feeling of certainty into numbers would increase the clarity and accuracy of my thinking, or provide any useful practical guidance.
For example, if I come across a ditch and decide to jump over it to save the effort of walking around to cross at a bridge, I’m effectively betting that it’s narrow enough to jump over safely. In reality, I’ll feel intuitively either that it’s safe to jump or not, and I’ll act on that feeling, produced by some opaque module for physics calculations in my brain. Of course, my conclusion might be wrong, and as a kid I would occasionally injure myself by judging wrongly in such situations, but how can I possibly quantify this feeling of certainty numerically in a meaningful way? It simply makes no sense. The overwhelming majority of real-life cases where I have to produce some judgment, and perhaps even bet on it, are of this sort.
It would be cool to have a brain that produces confidence estimates for its conclusions with greater precision, but mine simply isn’t like that, and it’s useless to pretend that it is.
When this is not possible, however, the only honest answer is that my decision would be guided by whatever intuitive feeling my brain happens to produce after some common-sense consideration, and unless this intuitive feeling told me that losing the bet is extremely unlikely, I would refuse to bet.
Applying the view of probability as willingness to bet, you can’t refuse to reveal your probability assignments. Life continually throws risky choices at us. You can perform a risky action X with high-value success Y and high-cost failure Z, or you can refuse to perform it, but either choice reveals something about your probability assignments. If you perform the risky action X, it reveals that you assign a sufficiently high probability to Y (i.e. a low one to Z) given the values that you place on Y and Z. If you refuse to perform the risky action X, it reveals that you assign a sufficiently low probability to Y given the values you place on Y and Z. This is nothing other than your willingness to bet.
In an actual case, your simple yes/no response to a given choice is not enough to reveal your probability assignment; it only reveals some information about it (that it is below or above a certain value). But counterfactually, we can imagine infinitely many variations on the choice you are presented with, and for each of these choices there is a response which (counterfactually) you would have given. This set of responses manifests your probability assignment (and also reveals its degree of precision). Of course, in real life we can’t usually conduct an experiment that reveals a substantial portion of this set of counterfactuals, so in real life we remain in the dark about your probability assignment (unless we find some cleverer way to elicit it than the direct, brute-force, test-all-variations approach I have just described). But the counterfactuals are still there, and still define a probability assignment, even if we don’t know what it is.
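Here is a minimal sketch of that brute-force elicitation idea, purely my own illustration: it assumes a simple expected-value decision rule (accept iff p*gain > (1-p)*loss) and made-up stakes, neither of which anyone in this exchange is committed to. Each observed accept/refuse decision then brackets the implicit probability of success, and more decisions tighten the bracket.

```python
# Sketch of the "test-all-variations" idea: every accept/refuse decision
# at given stakes brackets the implicit probability of success.
# The decision rule and the numbers are illustrative assumptions only.

def update_bounds(lower, upper, gain, loss, accepted):
    """An agent who accepts iff p * gain > (1 - p) * loss implicitly
    asserts p > loss / (gain + loss); refusing asserts the opposite."""
    threshold = loss / (gain + loss)
    if accepted:
        return max(lower, threshold), upper
    return lower, min(upper, threshold)

# Hypothetical observed choices: (gain, loss, did_the_person_accept)
choices = [
    (10.0, 1.0, True),    # accepted when little was at stake
    (10.0, 5.0, True),
    (10.0, 40.0, False),  # refused once failure became costly
]

lower, upper = 0.0, 1.0
for gain, loss, accepted in choices:
    lower, upper = update_bounds(lower, upper, gain, loss, accepted)

print(f"implied probability of success: between {lower:.2f} and {upper:.2f}")
```

On these made-up choices the script prints a bracket of roughly 0.33 to 0.80; with enough (counterfactual) variations the interval shrinks toward a point, which is the sense in which the full set of responses defines a probability assignment.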
And I honestly cannot think of a situation where translating this intuitive feeling of certainty into numbers would increase the clarity and accuracy of my thinking, or provide any useful practical guidance.
But this revealed probability assignment is parallel to revealed preference. The point of revealed preference is not to help the consumer make better choices. It is a conceptual and sometimes practical tool of economics. The economist studying people discovers their preferences by observing their purchases. And similarly, we can discover a person’s probability assignments by observing his choices. The purpose need not be to help that person to increase the clarity or accuracy of his own thinking, any more than the purpose of revealed preference is to help the consumer shop.
A person interested in self-knowledge, for whatever reason, might want to observe his own behavior in order to discover his own preferences. I think that people like Roissy in DC may be able to teach women about themselves if they choose to read him, teach them about what they really want in a man by pointing out what their behavior is, pointing out that they pursue certain kinds of men and shun others. Women—along with everybody else—are apparently suffering from many delusions about what they want, thinking they want one thing, but actually wanting another—as revealed by their behavior. This self-knowledge may or may not be helpful, but surely at least some women would be interested in it.
For example, if I come across a ditch and decide to jump over it to save the effort of walking around to cross at a bridge, I’m effectively betting that it’s narrow enough to jump over safely.
But as a matter of fact your choice is influenced by several factors, including the reward of successfully jumping over the ditch (i.e. the reduction in walking time) and the cost of attempting the jump and failing, along with the width of the gap. As these factors are (counterfactually) varied, a possibly precise picture of your probability assignment may emerge. That is, it may turn out that you are willing to risk the jump if failure would only sprain an ankle, but unwilling to risk the jump if failure is certain death. This would narrow down the probability of success that you have assigned to the jump—it would be probable enough to be worth risking the sprained ankle, but not probable enough to be worth risking certain death. This probability assignment is not necessarily anything that you have immediately available to your conscious awareness, but in principle it can be elicited through experimentation with variations on the scenario.
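To make that bracketing concrete with numbers of my own invention (nothing you have committed to): suppose the time saved by jumping is worth B, a sprained ankle costs C1, and the worse outcome costs C2, with C1 < C2. Being willing to jump in the first case but not in the second means roughly that p*B > (1-p)*C1 while p*B < (1-p)*C2, which pins your implicit probability of success p into the interval C1/(B+C1) < p < C2/(B+C2). With, say, B = 1, C1 = 3 and C2 = 99, that is 0.75 < p < 0.99. (Literal certain death pushes the upper bound toward 1, so a merely catastrophic injury gives the more informative bound.)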
That’s a startling statement (especially out of context).
Are you asking for a defense of the statement, or do you agree with it and are merely commenting on the way I expressed it?
I’ll give a defense by means of an example. At Wikipedia they give the following example of a counterfactual:
If Oswald had not shot Kennedy, then someone else would have.
Now consider the equation F=ma. This is translated at Wikipedia into English as:
A body of mass m subject to a force F undergoes an acceleration a that has the same direction as the force and a magnitude that is directly proportional to the force and inversely proportional to the mass, i.e., F = ma.
Now suppose that there is a body of mass m floating in space, and that it has not been subject to nor is it currently subject to any force. I believe that the following is a true counterfactual statement about the body:
Had this body (of mass m) been subject to a force F then it would have undergone an acceleration a that would have had the same direction as the force and a magnitude that would have been directly proportional to the force and inversely proportional to the mass.
That is a counterfactual statement following the model of the Wikipedia example. I believe it is true, and I believe that its contradiction (which is also a counterfactual, i.e., the claim that the body would not have undergone the stated acceleration) is false.
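To make the counterfactual’s content concrete (with numbers I am making up purely for illustration): had a body of mass m = 2 kg been subject to a force F = 10 N, it would have undergone an acceleration a = F/m = 10/2 = 5 m/s^2 in the direction of the force. The law fixes a definite answer to the “what would have happened” question, which is what gives the counterfactual a determinate truth value.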
I believe that this point can be extended to all the laws of physics, either Newton’s laws or, if they have been replaced, modern laws. And I believe, furthermore, that the point can be extended to higher-level statements about bodies which are not mere masses moving in space, but, say, thinking creatures making decisions.
Is there any part of this with which you disagree?
A point about the insertion of “I believe”. The phrase “I believe” is sometimes used by people to assert their religious beliefs. I don’t consider the point I am making to be a personal religious belief, but the plain truth. I only insert “I believe” because the very fact that you brought up the issue tells me that I may be in mixed company that includes someone whose philosophical education has instilled certain views.
I am merely commenting. Counterfactuals are counterfactual, and so don’t “exist” and can’t be “there” by their very nature.
Yes, of course, they’re part of how we do our analyses.