Suppose you have two studies, each of which measures and gives a probability for the same thing. The first study has a small sample size, and a not terribly rigorous experimental procedure; the second study has a large sample size, and a more thorough procedure. When called on to make a decision, you would use the probability from the larger study. But if the large study hadn’t been conducted, you wouldn’t give up and act like you didn’t have any probability at all; you’d use the one from the small study. You might have to do some extra sanity checks, and your results wouldn’t be as reliable, but they’d still be better than if you didn’t have a probability at all.
A probability assigned by common-sense reasoning is to a probability that came from a small study, as a probability from a small study is to a probability from a large study. The quality of probabilities varies continuously; you get better probabilities by conducting better studies. By saying that a probability based only on common-sense reasoning is meaningless, I think what you’re really trying to do is set a minimum quality level. Since probabilities that’re based on studies and calculation are generally better than probabilities that aren’t, this is a useful heuristic. However, it is only that, a heuristic; probabilities based on common-sense reasoning can sometimes be quite good, and they are often the only information available anywhere (and they are, therefore, the best information). Not all common-sense-based probabilities are equal; if an expert thinks for an hour and then gives a probability, without doing any calculation, then that probability will be much better than if a layman thinks about it for thirty seconds. The best common-sense probabilities are better than the worst statistical-study probabilities; and besides, there usually aren’t any relevant statistical calculations or studies to compare against.
I think what’s confusing you is an intuition that if someone gives a probability, you should be able to take it as-is and start calculating with it. But suppose you had collected five large studies, and someone gave you the results of a sixth. You wouldn’t take that probability as-is, you’d have to combine it with the other five studies somehow. You would only use the new probability as-is if it was significantly better (larger sample, more trustworthy procedure, etc) than the ones you already had, or you didn’t have any before. Now if there are no good studies, and someone gives you a probability that came from their common-sense reasoning, you almost certainly have a comparably good probability already: your own common-sense reasoning. So you have to combine it. So in a sense, those sorts of probabilities are less meaningful—you discard them when they compete with better probabilities, or at least weight them less—but there’s still a nonzero amount of meaning there.
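To make the combining step concrete, here is a minimal sketch of one way it could go; this is only an illustration, not the only correct procedure, and the weights and numbers are invented. Each estimate is converted to log-odds, averaged with weights standing in for how much you trust the source, and converted back:

```python
import math

def combine_probabilities(estimates):
    """Pool probability estimates by weighted averaging in log-odds space.
    `estimates` is a list of (probability, weight) pairs, where the weight
    stands in for how much you trust the source (sample size, rigor, etc.)."""
    total_weight = sum(w for _, w in estimates)
    pooled_log_odds = sum(w * math.log(p / (1 - p)) for p, w in estimates) / total_weight
    return 1 / (1 + math.exp(-pooled_log_odds))

# Five large studies plus one common-sense guess, all numbers made up:
studies = [(0.62, 400), (0.58, 350), (0.65, 500), (0.60, 450), (0.63, 420)]
common_sense = (0.80, 10)  # low weight: it nudges the pooled answer only slightly
print(round(combine_probabilities(studies + [common_sense]), 3))
```

The common-sense estimate enters the pool with a small weight rather than being either discarded or taken as-is, which is the sense in which it carries a nonzero amount of meaning.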
(Aside: I’ve been stuck for a while on an article I’m writing called “What Probability Requires”, dealing with this same topic, and seeing you argue the other side has been extremely helpful. I think I’m unstuck now; thank you for that.)
After thinking about your comment, I think this observation comes close to the core of our disagreement:
By saying that a probability based only on common-sense reasoning is meaningless, I think what you’re really trying to do is set a minimum quality level.
Basically, yes. More specifically, the quality level I wish to set is that the numbers must give more useful information than mere verbal expressions of confidence. Otherwise, their use at best simply adds nothing useful, and at worst leads to fallacious reasoning encouraged by a false feeling of accuracy.
Now, there are several possible ways to object to my position:
The first is to note that even if not meaningful mathematically, numbers can serve as communication-facilitating figures of speech. I have conceded this point.
The second way is to insist on an absolute principle that one should always attach numerical probabilities to one’s beliefs. I haven’t seen anything in this thread (or elsewhere) yet that would shake my belief in the fallaciousness of this position, or even provide any plausible-seeming argument in favor of it.
The third way is to agree that sometimes attaching numerical probabilities to common-sense judgments makes no sense, but that in some cases common-sense reasoning can produce numerical probabilities that give more useful information than fuzzy words alone. After the discussion with mattnewport and others, I agree that there are such cases, but I still maintain that they are rare exceptions. (In my original statement, I took an overly restrictive notion of “common sense”; I admit that in some cases, thinking that could reasonably be described that way is indeed precise enough to produce meaningful numerical probabilities.)
So, to clarify, which exact position do you take in this regard? Or would your position require a fourth item to summarize fairly?
I think what’s confusing you is an intuition that if someone gives a probability, you should be able to take it as-is and start calculating with it. [...] So in a sense, those sorts of probabilities are less meaningful—you discard them when they compete with better probabilities, or at least weight them less—but there’s still a nonzero amount of meaning there.
I agree that there is a non-zero amount of meaning, but the question is whether it exceeds what a simple verbal statement of confidence would convey. If I can’t take a number and start calculating with it, what good is it? (Except for the caveat about possible metaphorical meanings of numbers.)
My response to this ended up being a whole article, which is why it took so long. The short version of my position is: we should attach numbers to beliefs as often as possible, but for instrumental reasons rather than on principle.
As a matter of fact I can think of one reason—a strong reason, in my view—that the consciously felt feeling of certainty is liable to be systematically and significantly exaggerated relative to the true probability assigned by the person’s mental black box—the latter being something that we might in principle elicit through experimentation, by putting the same subject through variants of a given scenario. (Think revealed probability assignment, similar to revealed preference as understood by economists.)
The reason is that whole-hearted commitment is usually best whatever one chooses to do. Consider Buridan’s ass, but with the following alterations. Instead of hay and water, to make the situation more symmetrical, suppose the ass has two buckets of water, one on either side and about equally distant. Suppose furthermore that his mental black box assigns a 51% probability to the proposition that the bucket on the right is closer to him than the bucket on the left.
The question, then, is what the ass should consciously feel about the probability that the bucket on the right is closer. I propose that, given that his black box assigns a 51% probability to this, he should go to the bucket on the right. But given that he should go to the bucket on the right, he should go there without delay, without a hesitating step, because hesitation is merely a waste of time. But how can the ass go there without delay if he is consciously feeling that the probability is only 51% that the bucket on the right is closer? That feeling will cause uncertainty and hesitation within him and will slow him down. Therefore it is best if the ass is consciously, absolutely convinced that the bucket on the right is closer. This conscious feeling of certainty will speed his step and get him to the water quickly.
So it is best for Buridan’s ass that his consciously felt degrees of certainty are great exaggerations of his mental black box’s probability assignments. I think this generalizes. We should consciously feel much more certain of things than we really are, in order to get ourselves moving.
In fact, if Buridan’s ass’s mental black box assigns exactly 50% probability to the right bucket being the closer one, the black box should in effect flip a coin and then delude the conscious self into becoming entirely convinced that the right (or, depending on the coin flip, the left) bucket is the closer one, and act accordingly.
This can be applied to the reactions of prey to predators. It is so costly for a prey animal to be eaten, and comparatively so cheap for it merely to waste a bit of time running, that a prey animal is most likely to survive to reproduce if it is in the habit of completely believing that there is a predator after it far more often than there really is one. Even if possible-predator signals in the environment actually signify predators 10% of the time or less, since the prey animal never knows which of those signals marks a real predator, it needs to run for its very life every single time it senses such a signal. For it to do this, it must be fully mentally committed to the proposition that there is in fact a predator after it. There is no reason for the prey animal to have any less than full belief that there is a predator after it, each and every time it senses a possible predator.
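To make that cost asymmetry concrete, here is a toy expected-cost comparison; the specific numbers are invented purely for illustration.

```python
# Toy numbers: a possible-predator signal means a real predator 10% of the
# time; being eaten "costs" 1000 units, an unnecessary sprint costs 1 unit.
p_predator = 0.10
cost_eaten = 1000.0   # hypothetical cost of ignoring a real predator
cost_sprint = 1.0     # hypothetical cost of one wasted sprint

cost_always_run = cost_sprint              # sprint on every signal
cost_never_run = p_predator * cost_eaten   # gamble on every signal

print(cost_always_run)  # 1.0
print(cost_never_run)   # 100.0 -- always running wins by a wide margin
```

Under any remotely similar numbers, treating every signal as a real predator dominates, which is why full belief on every occasion costs the animal so little.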
I don’t agree with this conflation of commitment and belief. I’ve never had to run from a predator, but when I run to catch a train, I am fully committed to catching the train, although I may be uncertain about whether I will succeed. In fact, the less time I have, the faster I must run, but the less likely I am to catch the train. That only affects my decision to run or not. Once the decision is made, belief and uncertainty are irrelevant; intention and action are everything.
Maybe some people have to make themselves believe in an outcome they know to be uncertain, in order to achieve it, but that is just a psychological exercise, not a necessary part of action.
The question is not whether there are some examples of commitment which do not involve belief. The question is whether there are (some, many) examples where really, absolutely full commitment does involve belief. I think there are many.
Consider what commitment is. If someone says, “you don’t seem fully committed to this”, what sort of thing might have prompted him to say this? It’s something like, he thinks you aren’t doing everything you could possibly do to help this along. He thinks you are holding back.
You might reply to this criticism, “I am not holding anything back. There is literally nothing more that I can do to further the probability of success, so there is no point in doing more—it would be an empty and possibly counterproductive gesture rather than being an action that truly furthers the chance of success.”
So the important question is, what can a creature do to further the probability of success? Let’s look at you running to catch the train. You claim that believing that you will succeed would not further the success of your effort. Well, of course not! I could have told you that! If you believe that you will succeed, you can become complacent, which runs the risk of slowing you down.
But if you believe that there is something chasing you, that is likely to speed you up.
Your argument is essentially, “my full commitment didn’t involve belief X, therefore you’re wrong”. But belief X is a belief that would have slowed you down. It would have reduced, not furthered, your chance of success. So of course your full commitment didn’t involve belief X.
My point is that it is often the case that a certain consciously felt belief would increase a person’s chances of success, given their chosen course of action. And in light of what commitment is—the commitment of one’s self and one’s resources to furthering the probability of success—if a belief would further the chance of success, then full, really full commitment will include that belief.
So I am not conflating conscious belief with commitment. I am saying that conscious belief can be, and often is, involved in the furthering of success, and therefore can be and often is a part of really full commitment. That is no more conflating belief with commitment than saying that a strong fabric makes a good coat conflates fabric with coats.
You’re right that my analogy was inaccurate: what corresponds in the train-catching scenario to believing there is a predator is my belief that I need to catch this train.
My point is that it is often the case that a certain consciously felt belief would increase a person’s chances of success, given their chosen course of action. And in light of what commitment is—the commitment of one’s self and one’s resources to furthering the probability of success—if a belief would further the chance of success, then full, really full commitment will include that belief.
A stronger belief may produce stronger commitment, but strong commitment does not require strong belief. The animal either flees or does not, because a half-hearted sprint will have no effect on the outcome whether a predator is there or not. Similarly, there’s no point making a half-hearted jog for a train, regardless of how much or little one values catching it.
Belief and commitment to act on the belief are two different parts of the process.
Of course, a lot of the “success” literature urges people to have faith in themselves, to believe in their mission, to cast all doubt aside, etc., and if a tool works for someone I’ve no urge to tell them it shouldn’t. But, personally, I take Yoda’s attitude: “Do, or do not.”
Yoda tutors Luke in a Jedi philosophy and practice, which will take Luke a while to learn. In the meantime, however, Luke is merely an unpolished human. And I am not here recommending a particular philosophy and practice of thought and behavior, but making a prediction about how unpolished humans (and animals) are likely to act. My point is not to recommend that Buridan’s ass should have an exaggerated confidence that the right bucket is closer, but to observe that we can expect him to have one, because, for the reasons I described, exaggerated confidence is likely to have been selected for: it is likely to have improved the chances of survival of asses who did not have the benefit of Yoda’s instruction.
So I am not recommending; rather, I expect that humans will commonly have conscious feelings of confidence which are exaggerated, and which do not truly reflect the output of the human’s mental black box, the mental machinery to which he does not have access.
Let me explain, by the way, what I mean here, because I am saying that the black box can output a 51% probability for Proposition P while at the same time causing the person to be consciously, absolutely convinced of the truth of P. This may be confusing, because I seem to be saying that the black box outputs two probabilities: a 51% probability for purposes of decision-making and a 100% probability for conscious consumption. So let me explain what I mean with an example.
Suppose you want to test Buridan’s ass to see what probability he assigns to the proposition that the right bucket is closer. What you can do is take the scenario and alter it as follows: introduce a mechanism which, with 4% probability, will move the right bucket further away than the left bucket before Buridan’s ass gets to it.
Now, if Buridan’s ass assigns a 100% probability that the right bucket is (currently) closer than the left bucket, then taking into account the introduced mechanism, this yields a 96% probability that, by the time the ass gets to it, the right bucket will still be closer to the ass’s starting position. But if Buridan’s ass assigns a 51% probability that the right bucket is (currently) closer than the left bucket, then taking into account the mechanism, this yields approximately a 49% probability (assuming I did the numbers right) that by the time the ass gets to it, the right bucket will be closer.
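Spelling out that arithmetic, under the reading that the mechanism, when it fires, always ends up leaving the right bucket further away than the left:

```python
# The mechanism leaves the buckets alone with probability 0.96; only in that
# case does the ass's original estimate still apply on arrival.
p_mechanism_fires = 0.04

for p_right_closer_now in (1.00, 0.51):
    p_right_closer_on_arrival = (1 - p_mechanism_fires) * p_right_closer_now
    print(p_right_closer_now, "->", round(p_right_closer_on_arrival, 4))
# 1.0  -> 0.96
# 0.51 -> 0.4896  (the "approximately 49%" above)
```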
I am, of course, assuming that the ass is smart enough to understand and incorporate the mechanism into his calculations. Animals have eyes and ears and brains for a reason, so I don’t think it’s a stretch to suppose that there is some way to implement this scenario in a way that an ass really could understand.
So here’s how the test works. You observe that the ass goes to the bucket on the right. You are not sure whether the ass has assigned a 51% probability or a 100% probability to the right bucket being nearer. So you redo the experiment with the added mechanism. If the ass now (with the introduced mechanism) goes to the bucket on the left, then you can infer that the ass believes the probability that the right bucket will be closer by the time he reaches it is less than 50%. But that probability only changed by a few percentage points as a result of the added mechanism. Therefore he must have assigned only slightly more than 50% probability to it to begin with.
And in this sort of way, you can elicit the ass’s probability assignments.
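A sketch of the same idea stated slightly more generally, under the same simplifying reading of the mechanism: sweep the mechanism probability upward and note the value m at which the ass first switches to the left bucket. At that switch point his internal probability p satisfies roughly p × (1 − m) = 0.5, so p ≈ 0.5 / (1 − m).

```python
def revealed_probability(switch_point_m):
    """Back out the ass's internal probability from the smallest mechanism
    probability m at which he first prefers the left bucket, using the
    indifference condition p * (1 - m) = 0.5."""
    return 0.5 / (1 - switch_point_m)

print(round(revealed_probability(0.04), 3))  # ~0.521: he only barely favored the right bucket
print(round(revealed_probability(0.50), 3))  # 1.0: he was certain the right bucket was closer
```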
The ass’s conscious state of mind, however, is something completely separate from this. If we grant the ass the gift of speech, the ass may well say, each time, “there’s not a shred of doubt in my mind that the right bucket is closer”, or “I am entirely confident that the left bucket is closer”.
My point being that we may well be like the ass, and introspective examination of our own conscious state of mind may fail to reveal the actual probabilities that our mental black boxes have assigned to events. It may instead reveal only overconfident delusions that the black box has instilled in the conscious mind for the purpose of encouraging quick action.