Given your position on the meaninglessness of assigning a numerical probability value to a vague feeling of how likely something is, how would you decide whether you were being offered good odds if offered a bet?
In reality, it is rational to bet only with people over whom you have superior relevant knowledge, or with someone who is suffering from an evident failure of common sense. Otherwise, betting is just gambling (which of course can be worthwhile for fun or signaling value). Look at the stock market: it’s pure gambling, unless you have insider knowledge or vastly higher expertise than the average investor.
This is the basic reason why I consider the emphasis on subjective Bayesian probabilities that is so popular here misguided. In technical problems where probability calculations can be helpful, the experts in the field already know how to use them. On the other hand, for the great majority of the relevant beliefs and conclusions you’ll form in life, they offer nothing useful beyond what your vague common sense is already telling you. If you start taking them too seriously, it’s easy to start fooling yourself that your thinking is more accurate and precise than it really is, and if you start actually betting on them, you’ll be just gambling.
If you’re not in the habit of accepting bets, how do you think someone who does this for a living (a bookie for example) should go about deciding on what odds to assign to a given bet?
I’m not familiar with the details of this business, but from what I understand, bookmakers work in such a way that they’re guaranteed to make a profit no matter what happens. Effectively, they exploit the inconsistencies between different people’s estimates of what the favorable odds are. (If there are bookmakers who stake their profit on some particular outcome, then I’m sure that they have insider knowledge if they can stay profitable.) Now of course, the trick is to come up with a book that is both profitable and offers odds that will sell well, but here we get into the fuzzy art of exploiting people’s biases for profit.
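To make that guarantee concrete, here is a minimal sketch of the arithmetic, with hypothetical decimal odds and stakes (nothing here is real market data):

```python
# Sketch of a "balanced book": the implied probabilities of the offered odds
# sum to more than 1 (the "overround"), so if stakes are spread in proportion
# to them, the bookmaker profits whichever outcome occurs.
# Odds and stakes below are hypothetical illustrations.

decimal_odds = {"horse_a": 1.8, "horse_b": 2.4, "horse_c": 7.0}

implied = {k: 1 / o for k, o in decimal_odds.items()}
overround = sum(implied.values())
print(f"overround: {overround:.3f}")  # ~1.115, i.e. an ~11.5% margin

total_staked = 1000.0
stakes = {k: total_staked * implied[k] / overround for k in decimal_odds}
for k, o in decimal_odds.items():
    payout = stakes[k] * o  # equals total_staked / overround for every k
    print(f"{k}: stake {stakes[k]:.2f}, payout if it wins {payout:.2f}")
# Each payout is total_staked / overround (~896.80 < 1000), so roughly 103
# in profit is locked in no matter which horse wins.
```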
In reality, it is rational to bet only with people over whom you have superior relevant knowledge, or with someone who is suffering from an evident failure of common sense.
You still have to be able to translate your superior relevant knowledge into odds in order to set the terms of the bet, however. Do you not believe that this is an ability that people have varying degrees of aptitude for?
Look at the stock market: it’s pure gambling, unless you have insider knowledge or vastly higher expertise than the average investor.
Vastly higher expertise than the average investor would appear to include something like the ability in question—translating your beliefs about the future into a probability such that you can judge whether investments have positive expected value. If you accept that true alpha exists (and the evidence suggests that, though rare, a small percentage of the best investors do appear to have positive alpha), then what process do you believe those who possess it use to decide which investments are good and which bad?
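For concreteness, a minimal sketch (with hypothetical numbers) of what such a translation amounts to: a subjective probability plus offered odds yields an expected value and, if one wants it, a Kelly-style stake size.

```python
# Hedged sketch: judging whether offered odds are "good" relative to a
# subjective probability estimate. All numbers are hypothetical.

def expected_value(p: float, decimal_odds: float) -> float:
    """Expected profit per unit staked: win (odds - 1) with prob p, else lose 1."""
    return p * (decimal_odds - 1) - (1 - p)

def kelly_fraction(p: float, decimal_odds: float) -> float:
    """Kelly-criterion stake as a fraction of bankroll (0 if the bet is -EV)."""
    b = decimal_odds - 1  # net odds received on a win
    return max(0.0, (p * b - (1 - p)) / b)

p = 0.55     # your estimated probability that the outcome occurs
odds = 2.10  # decimal odds on offer

print(f"EV per unit staked: {expected_value(p, odds):+.3f}")      # +0.155
print(f"Kelly stake: {kelly_fraction(p, odds):.1%} of bankroll")  # 14.1%
```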
What’s your opinion on prediction markets? They seem to produce fairly good probability estimates, so presumably the participants must be using some better-than-random process for arriving at numerical probability estimates for their predictions.
I’m not familiar with the details of this business, but from what I understand, bookmakers work in such a way that they’re guaranteed to make a profit no matter what happens.
They certainly aim for a balanced book, but they wouldn’t be very profitable if they were not reasonably competent at setting initial odds (and updating them in the light of new information). If the initial odds are wildly out of line with their customers’, then they won’t be able to make a balanced book.
You still have to be able to translate your superior relevant knowledge into odds in order to set the terms of the bet, however. Do you not believe that this is an ability that people have varying degrees of aptitude for?
They sure do, but in all the examples I can think of, people either just follow their intuition directly when faced with a concrete situation, or employ rigorous science to attack the problem. (It doesn’t have to be the official accredited science, of course; the Venn diagram of official science and valid science features only a partial overlap.) I just don’t see any practical examples of people successfully betting by doing calculations with probability numbers derived from their intuitive feelings of confidence that would go beyond what a mere verbal expression of these feelings would convey. Can you think of any?
If you accept that true alpha exists (and the evidence suggests that, though rare, a small percentage of the best investors do appear to have positive alpha), then what process do you believe those who possess it use to decide which investments are good and which bad?
Well, if I knew, I would be doing it myself—and I sure wouldn’t be talking about it publicly!
The problem with discussing investment strategies is that any non-trivial public information about this topic necessarily has to be bullshit, or at least drowned in bullshit to the point of being irrecoverable, since exclusive possession of correct information is a sure path to getting rich, but its effectiveness critically depends on exclusivity. Still, I would be surprised to find out that the success of some alpha-achieving investors is based on taking numerical expressions of common-sense confidence seriously.
In a sense, a similar problem faces anyone who aspires to be more “rational” than the average folk in any meaningful sense. Either your “rationality” manifests itself only in irrelevant matters, or you have to ask yourself what is so special and exclusive about you that you’re reaping practical success that eludes so many other people, and in such a way that they can’t just copy your approach.
What’s your opinion on prediction markets? They seem to produce fairly good probability estimates, so presumably the participants must be using some better-than-random process for arriving at numerical probability estimates for their predictions.
I agree with this assessment, but the accuracy of information aggregated by a prediction market implies nothing about your own individual certainty. Prediction markets work by cancelling out random errors and enabling specialists who wield esoteric expertise to take advantage of amateurs’ systematic biases. Where your own individual judgment falls within this picture, you cannot know, unless you’re one of these people with esoteric expertise.
I just don’t see any practical examples of people successfully betting by doing calculations with probability numbers derived from their intuitive feelings of confidence that would go beyond what a mere verbal expression of these feelings would convey. Can you think of any?
I’d speculate that bookies and professional sports bettors are doing something like this. By bookies here I mean primarily the kind of individuals who stand with a chalkboard at race tracks, rather than the large companies. They probably use some semi-rigorous/scientific techniques to analyze past form and then mix that with a lot of intuition/expertise, together with lots of detailed domain-specific knowledge and ‘insider’ info (a particular horse or jockey has recently recovered from an illness or injury and so may perform worse than expected, etc.). They’ll then integrate all of this information using some mathematically non-rigorous, opaque mental process and derive a probability estimate, which will determine what odds they are willing to offer or accept.
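A minimal sketch of that last step, turning an integrated estimate into quoted odds (the estimate and margin below are hypothetical):

```python
# Hedged sketch: from a probability estimate to the odds a bookie might quote.

def fair_decimal_odds(p: float) -> float:
    """Break-even decimal odds implied by probability p."""
    return 1 / p

def quoted_odds(p: float, margin: float = 0.05) -> float:
    """Shade the fair odds so the implied probability carries a profit margin."""
    return 1 / (p * (1 + margin))

p_estimate = 0.25  # hypothetical integrated judgment of a horse's chance
print(f"fair odds:   {fair_decimal_odds(p_estimate):.2f}")  # 4.00
print(f"quoted odds: {quoted_odds(p_estimate):.2f}")        # 3.81
```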
I’ve read a fair bit of material by professional investors and macro hedge fund managers describing their thinking and how they make investment decisions. I think they are often doing something similar. They integrate information derived from rigorous analysis with fuzzier, more intuitive reasoning based on expertise, knowledge, and experience, and use it to derive probabilities for particular outcomes. They then seek out investments that currently appear to be mispriced relative to the probabilities they’ve estimated, ideally with a fairly large margin of safety to allow for the imprecise and uncertain nature of their estimates.
It’s entirely possible that this is not what’s going on at all, but it appears to me that something like this is a factor in the success of anyone who consistently profits from dealing with risk and uncertainty.
The problem with discussing investment strategies is that any non-trivial public information about this topic necessarily has to be bullshit, or at least drowned in bullshit to the point of being irrecoverable, since exclusive possession of correct information is a sure path to getting rich, but its effectiveness critically depends on exclusivity.
My experience leads me to believe that this is not entirely accurate. Investors are understandably reluctant to share very specific, time-critical investment ideas for free, but they frequently share their thought processes and talk in general terms about their approaches, and my impression is that they are no more obfuscatory or deliberately misleading than anyone else who talks about their success in any field.
In addition, hedge fund investor letters often share quite specific details of reasoning after the fact, once profitable trades have been closed, and these kinds of details are commonly elaborated in books and interviews once time-sensitive information has lost most of its value.
Either your “rationality” manifests itself only in irrelevant matters, or you have to ask yourself what is so special and exclusive about you that you’re reaping practical success that eludes so many other people, and in such a way that they can’t just copy your approach.
This seems to be taking the ethos of the EMH a little far. I comfortably attribute a significant portion of my academic and career success to being more intelligent and a clearer thinker than most people. Anyone here who, through a sense of false modesty, believes otherwise is probably deluding themselves.
Where your own individual judgment falls within this picture, you cannot know, unless you’re one of these people with esoteric expertise.
This seems to be the main point of ongoing calibration exercises. If you have a track record of well-calibrated predictions, then you can gain some confidence that your own individual judgment is sound.
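A minimal sketch of what such a track-record check might look like, on made-up data:

```python
# Hedged sketch: scoring a personal prediction log for calibration.
# Each entry is (stated probability, whether the event occurred); data is made up.

from collections import defaultdict

log = [(0.9, True), (0.9, True), (0.9, False), (0.7, True), (0.7, False),
       (0.7, True), (0.3, False), (0.3, False), (0.3, True), (0.1, False)]

# Brier score: mean squared error of the stated probabilities (lower is
# better; constant 50% guessing scores 0.25 on any log).
brier = sum((p - int(hit)) ** 2 for p, hit in log) / len(log)
print(f"Brier score: {brier:.3f}")

# Bucketed calibration: of the events you called at probability p,
# roughly a fraction p should have occurred if you are well calibrated.
buckets = defaultdict(list)
for p, hit in log:
    buckets[p].append(hit)
for p in sorted(buckets):
    hits = buckets[p]
    print(f"stated {p:.0%}: observed {sum(hits)/len(hits):.0%} over {len(hits)} predictions")
```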
Overall, I don’t think we have a massive disagreement here. I agree with most of your reservations, and I’m by no means certain that improving one’s own calibration is feasible, but I suspect that it might be, and it seems sufficiently instrumentally useful that I’m interested in trying to improve my own.
I’d speculate that bookies and professional sports bettors are doing something like this. [...] I’ve read a fair bit of material by professional investors and macro hedge fund managers describing their thinking and how they make investment decisions. I think they are often doing something similar.
Your knowledge about these trades seems to be much greater than mine, so I’ll accept these examples. In the meantime, I have expounded my whole view of the topic in a reply to an excellent systematic list of questions posed by prase, and in those terms, this would indicate the existence of what I called the third type of exceptions under point (3). I still maintain that these are rare exceptions in the overall range of human judgments, though, and that my basic point holds for the overwhelming majority of human common-sense thinking.
Investors are understandably reluctant to share very specific, time-critical investment ideas for free, but they frequently share their thought processes and talk in general terms about their approaches, and my impression is that they are no more obfuscatory or deliberately misleading than anyone else who talks about their success in any field.
I don’t think they’re being deliberately misleading. I just think that the whole mechanism by which the public discourse on these topics comes into being inherently generates a nearly impenetrable confusion, which you can dispel to extract useful information only if you are already an expert in the first place. There are many specific reasons for this, but it all ultimately comes down to the stability of the weak EMH equilibrium.
This seems to be taking the ethos of the EMH a little far. I comfortably attribute a significant portion of my academic and career success to being more intelligent and a clearer thinker than most people. Anyone here who, through a sense of false modesty, believes otherwise is probably deluding themselves.
Oh, absolutely! But you’re presumably estimating the rank of your abilities based on some significant accomplishments that most people would indeed find impossible to achieve. What I meant to say (even though I expressed it poorly) is that there is no easy and readily available way to excel at “rationality” in any really relevant matters. This is in contrast to the attitude, sometimes seen among people here, that you can learn about Bayesianism or whatever else and, just by virtue of that, set yourself apart from the masses in accuracy of thought. The EMH ethos is, in my opinion, a good intellectual antidote against such temptations of hubris.
Given your position on the meaninglessness of assigning a numerical probability value to a vague feeling of how likely something is, how would you decide whether you were being offered good odds if offered a bet?
In reality, it is rational to bet only with people over whom you have superior relevant knowledge, or with someone who is suffering from an evident failure of common sense
You’re dodging the question. What if the odds arose from a natural process, so that there isn’t a person on the other side of the bet to compare your state of knowledge against?
I think this is right. The idea that you would be betting against another person is inessential, an unfortunate distraction arising from the choice of thought experiment. Admittedly it’s a natural way to understand the thought experiment, but it’s inessential; the experiment could be revised to exclude it. In fact, at every moment we make decisions whose outcomes depend on things we don’t know, and in making those decisions we are therefore, in effect, gambling. We are surrounded by risks, and our decisions reveal our assessment of those risks.
You’re dodging the question. What if the odds arose from a natural process, so that there isn’t a person on the other side of the bet to compare your state of knowledge against?
Maybe it’s my failure of English comprehension (I’m not a native speaker, as you might guess from my frequent grammatical errors), but when I read the phrase “being offered good odds if offered a bet,” I understood it as asking about a bet with opponents who stand to lose if my guess is right. So, honestly, I wasn’t dodging the question.
But to answer your question, it depends on the concrete case. Some natural processes can be approximated with models that yield useful probability estimates, and faced with such a process, I would of course try to use the best scientific knowledge available to calculate the odds, if the stakes were high enough to justify the effort. When this is not possible, however, the only honest answer is that my decision would be guided by whatever intuitive feeling my brain happens to produce after some common-sense consideration, and unless this intuitive feeling told me that losing the bet is extremely unlikely, I would refuse to bet. And I honestly cannot think of a situation where translating this intuitive feeling of certainty into numbers would increase the clarity and accuracy of my thinking, or provide any useful practical guidance.
For example, if I come across a ditch and decide to jump over it to save the effort of walking around to a bridge, I’m effectively betting that it’s narrow enough to jump over safely. In reality, I’ll feel intuitively either that it’s safe to jump or not, and I’ll act on that feeling, produced by some opaque module for physics calculations in my brain. Of course, my conclusion might be wrong, and as a kid I would occasionally injure myself by judging wrongly in such situations, but how can I possibly quantify this feeling of certainty numerically in a meaningful way? It simply makes no sense. The overwhelming majority of real-life cases where I have to produce some judgment, and perhaps even bet on it, are of this sort.
It would be cool to have a brain that produces confidence estimates for its conclusions with greater precision, but mine simply isn’t like that, and it’s useless to pretend that it is.
When this is not possible, however, the only honest answer is that my decision would be guided by whatever intuitive feeling my brain happens to produce after some common-sense consideration, and unless this intuitive feeling told me that losing the bet is extremely unlikely, I would refuse to bet.
Applying the view of probability as willingness to bet, you can’t refuse to reveal your probability assignments. Life continually throws risky choices at us. You can perform risky action X, with high-value success Y and high-cost failure Z, or you can refuse to perform it, but either way your choice reveals something about your probability assignments. If you perform the risky action X, it reveals that you assign sufficiently high probability to Y (i.e., low to Z) given the values that you place on Y and Z. If you refuse to perform risky action X, it reveals that you assign sufficiently low probability to Y given the values you place on Y and Z. This is nothing other than your willingness to bet.
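In symbols (a sketch of the decision rule just described, writing $U(Y) > 0$ for the value of success, $U(Z) < 0$ for the cost of failure, and normalizing refusal to zero), you perform X exactly when

$$p\,U(Y) + (1 - p)\,U(Z) > 0, \quad\text{i.e.}\quad p > \frac{-U(Z)}{U(Y) - U(Z)},$$

so each accept-or-refuse decision reveals on which side of this threshold your implicit probability falls.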
In an actual case, your simple yes/no response to a given choice is not enough to reveal your probability assignment fully; it only reveals some information about it (that it is below or above a certain value). But counterfactually, we can imagine infinite variations on the choice you are presented with, and for each of these choices, there is a response which (counterfactually) you would have given. This set of responses manifests your probability assignment (and reveals also its degree of precision). Of course, in real life, we can’t usually conduct an experiment that reveals a substantial portion of this set of counterfactuals, so in real life we remain in the dark about your probability assignment (unless we find some cleverer way to elicit it than the direct, brute-force, test-all-variations approach I have just described). But the counterfactuals are still there, and still define a probability assignment, even if we don’t know what it is.
And I honestly cannot think of a situation where translating this intuitive feeling of certainty into numbers would increase the clarity and accuracy of my thinking, or provide any useful practical guidance.
But this revealed probability assignment is parallel to revealed preference. The point of revealed preference is not to help the consumer make better choices. It is a conceptual and sometimes practical tool of economics. The economist studying people discovers their preferences by observing their purchases. And similarly, we can discover a person’s probability assignments by observing his choices. The purpose need not be to help that person to increase the clarity or accuracy of his own thinking, any more than the purpose of revealed preference is to help the consumer shop.
A person interested in self-knowledge, for whatever reason, might want to observe his own behavior in order to discover his own preferences. I think that people like Roissy in DC may be able to teach women about themselves if they choose to read him, teach them about what they really want in a man by pointing out what their behavior is, pointing out that they pursue certain kinds of men and shun others. Women—along with everybody else—are apparently suffering from many delusions about what they want, thinking they want one thing, but actually wanting another—as revealed by their behavior. This self-knowledge may or may not be helpful, but surely at least some women would be interested in it.
For example, if I come across a ditch and decide to jump over it to save the effort of walking around to a bridge, I’m effectively betting that it’s narrow enough to jump over safely.
But as a matter of fact your choice is influenced by several factors, including the reward of successfully jumping over the ditch (i.e. the reduction in walking time) and the cost of attempting the jump and failing, along with the width of the gap. As these factors are (counterfactually) varied, a possibly precise picture of your probability assignment may emerge. That is, it may turn out that you are willing to risk the jump if failure would only sprain an ankle, but unwilling to risk the jump if failure is certain death. This would narrow down the probability of success that you have assigned to the jump—it would be probable enough to be worth risking the sprained ankle, but not probable enough to be worth risking certain death. This probability assignment is not necessarily anything that you have immediately available to your conscious awareness, but in principle it can be elicited through experimentation with variations on the scenario.
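A minimal sketch of that elicitation-by-variation, with hypothetical utilities (the loss is written as a positive magnitude, so the threshold below matches the one given earlier): each observed decision narrows the interval that must contain the implicit probability of success.

```python
# Hedged sketch of eliciting an implicit probability from varied choices.
# 'gain' and 'loss' are hypothetical utility magnitudes; 'loss' is the
# (positive) size of the cost of failure.

def threshold(gain: float, loss: float) -> float:
    """P(success) above which jumping has positive expected utility:
    p * gain - (1 - p) * loss > 0  <=>  p > loss / (gain + loss)."""
    return loss / (gain + loss)

# (gain from a successful jump, cost of failing, observed decision)
observations = [
    (10.0, 50.0, True),         # jumps when failure means a sprained ankle
    (10.0, 1_000_000.0, False)  # refuses when failure means near-certain death
]

p_low, p_high = 0.0, 1.0
for gain, loss, jumped in observations:
    t = threshold(gain, loss)
    if jumped:
        p_low = max(p_low, t)    # jumping reveals p above the threshold
    else:
        p_high = min(p_high, t)  # refusing reveals p below it

print(f"implicit P(success) lies in ({p_low:.3f}, {p_high:.6f})")
# -> implicit P(success) lies in (0.833, 0.999990)
```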
But the counterfactuals are still there, and still define a probability assignment, even if we don’t know what it is.

That’s a startling statement (especially out of context).

Are you asking for a defense of the statement, or do you agree with it and are merely commenting on the way I expressed it?
I’ll give a defense by means of an example. At Wikipedia they give the following example of a counterfactual:
If Oswald had not shot Kennedy, then someone else would have.
Now consider the equation F=ma. This is translated at Wikipedia into the English:
A body of mass m subject to a force F undergoes an acceleration a that has the same direction as the force and a magnitude that is directly proportional to the force and inversely proportional to the mass, i.e., F = ma.
Now suppose that there is a body of mass m floating in space, and that it has not been subject to nor is it currently subject to any force. I believe that the following is a true counterfactual statement about the body:
Had this body (of mass m) been subject to a force F then it would have undergone an acceleration a that would have had the same direction as the force and a magnitude that would have been directly proportional to the force and inversely proportional to the mass.
That is a counterfactual statement following the model of the wikipedia example, and I believe it is true, and I believe that the contradiction of the counterfactual (which is also a counterfactual, i.e., the claim that the body would not have undergone the stated acceleration) is false.
I believe that this point can be extended to all the laws of physics, either Newton’s laws or, if they have been replaced, modern laws. And I believe, furthermore, that the point can be extended to higher-level statements about bodies which are not mere masses moving in space, but, say, thinking creatures making decisions.
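As a small illustrative gloss (my sketch, not the commenter’s own): a law like F = ma behaves as a function that is defined even at inputs that never actually occur, which is what makes such counterfactuals true.

```python
# A law as a counterfactual-supporting function: acceleration is defined for
# every force, including forces that are never actually applied to the body.

def acceleration(force_newtons: float, mass_kg: float) -> float:
    """Rearranged F = m a."""
    return force_newtons / mass_kg

m = 2.0  # a body floating in space, subject to no force

print(acceleration(0.0, m))   # the actual history: 0.0 m/s^2
print(acceleration(10.0, m))  # had a 10 N force been applied: 5.0 m/s^2
```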
Is there any part of this with which you disagree?
A point about the insertion of “I believe”. The phrase “I believe” is sometimes used by people to assert their religious beliefs. I don’t consider the point I am making to be a personal religious belief, but the plain truth. I only insert “I believe” because the very fact that you brought up the issue tells me that I may be in mixed company that includes someone whose philosophical education has instilled certain views.
I am merely commenting. Counterfactuals are counterfactual, and so don’t “exist” and can’t be “there” by their very nature.
Yes, of course, they’re part of how we do our analyses.