So, he agrees that how you act depends on utility. But, contrary to what he appears to believe, to declare a belief is to act—the action is linguistic. Therefore how you declare your beliefs depends on utility—that is, on the utility of making that declaration.
I apologize for giving you the impression that I disagree with this. By ‘being convinced’, I thought you were talking about belief states rather than declarations of belief, and that is how the confusion arose (yes?).
I think that belief is a kind of internal declaration of belief, because it serves essentially the same function (internally) as declaration of belief serves (externally). Please allow me to explain.
There are two pictures of how the brain works which don’t match up comfortably. On one picture, the brain assigns a probability to something. On the other picture, the brain either believes, or fails to believe, something. The reason they don’t match up is that in the first picture the range of possible brain-states is continuous, ranging from P=0 to P=1. But in the second picture, the range of possible brain-states is binary: one state is the state of belief, the other is the state of failure to believe.
So the question then is, how do we reconcile these two pictures? My current view is that on a more fundamental level, our brains assign probabilities. And on a more superficial level, which is partially informed by the fundamental level, we flip a switch between two states: belief and failure to believe.
I think a key question here is: why do we have these two levels, the continuous level which assigns probabilities, and the binary level which flips a switch between two states? I think the reason for the second level is that action is (usually) binary. If you try to draw a map from probability assignment to best course of action (physical action involving our legs and arms), what you find is that the optimal leg/arm action quite often does not range continuously as probability assignment ranges from 0 to 1. Rather, at some threshold value, the optimal leg/arm action switches from one action to another, quite different action—with nothing in between.
So the level of action is a level populated by distinct courses of action with nothing in between, rather than a continuous range of action. What I think, then, is that the binary level of belief versus failure to believe is a kind of half-way point between probability assignments and leg/arm action. It is a translation of a probability assignment (which ranges continuously from zero to one) into a non-continuous, binary belief which is immediately translatable into decision and then into leg/arm action.
But as I think has been agreed, the optimal course of action does not depend merely on probability assignments. It also depends on value assignments. So, depending on your value assignments, the optimal course of action may switch from A to B at P=60%, or alternatively at P=80%, etc. In the case of crossing the street, I argued that the optimal course of action switches at P>99.9%.
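To make the dependence on value assignments concrete, here is a minimal worked version of the street-crossing case, under the simplifying assumption that waiting costs nothing, crossing while it is safe yields a gain G, and crossing while it is unsafe costs a loss L (the payoff ratios below are illustrative, not figures from the earlier discussion). Writing P for the probability that crossing is safe, crossing beats waiting exactly when

\[
P \cdot G - (1 - P) \cdot L > 0 \quad\Longleftrightarrow\quad P > \frac{L}{G + L}.
\]

So if L = 1.5G the action switches at P = 60%; if L = 4G it switches at P = 80%; and if the harm is roughly a thousand times the gain (L = 999G), it switches at P = 99.9%. The threshold is set entirely by the value assignments, not by the probability machinery itself.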
But binary belief (i.e. belief versus non-belief), I think, is immediately translatable into decision and action. That, I think, is its function. But in that case, since optimal action switches at different P depending on value assignments, belief must also switch between belief and failure to believe at different P depending on value assignments.
Okay, this makes sense, though I think I’d use ‘belief’ differently.

What does it mean in a situation where I take precautions against two possible but mutually exclusive dangers?

Here’s a concise answer that straightforwardly applies the rule I already stated. Since my rule only applies above 50%, and since P(being shot)=10% (as I recall), we must consider the negation. Suppose P(I will be shot) is 10% and P(I will be stabbed) is 10%, and suppose that (for some reason) “I will be shot” and “I will be stabbed” are mutually exclusive. Since P<50% for each of these, we turn it around and get:
P(I will not be shot) is 90% and P(I will not be stabbed) is 90%. Because the cost of being shot, and the cost of being stabbed, are so very high, the threshold for being convinced must be very high as well; set it to 99.9%. Since P=90% for each of these, neither reaches my threshold for being convinced.
Therefore I am not convinced that I will not be shot and I am not convinced that I will not be stabbed. Therefore I will not go without my bulletproof body armor and I will not go without my stab-proof body armor.
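For concreteness, here is the same reasoning in numbers. The 10% probabilities and the 99.9% threshold come from the discussion above; the harm and inconvenience figures are made-up placeholders, not claims from it.

\[
P(\text{not shot}) = P(\text{not stabbed}) = 0.9 \;<\; 0.999 = \text{threshold},
\]
\[
\text{wear the vest if } 0.1 \cdot H > c, \quad\text{e.g. } H = 1000\,c \;\Rightarrow\; 100\,c > c.
\]

Neither negation clears the conviction threshold, so both sets of armor go on; and the underlying expected-value check agrees, since with a harm H that dwarfs the inconvenience c of the armor, the precaution pays, and the same calculation applies independently to the stab-proof armor.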
So the rule seems to work. The fact that these are mutually exclusive dangers doesn’t seem to affect the outcome. [Added: For what I consider to be a more useful discussion of the topic, see my other answer.]
[Added: see my other answer for a concise answer, which however leaves out a lot that I think important to discuss.]
For starters, I think there is no problem understanding these two precautions against mutually exclusive dangers in terms of probability assignments, what I consider the more fundamental level of how we think. In fact, I take this fact (that we do prepare for mutually exclusive dangers) as evidence that our fundamental way of thinking really is better described in terms of probability assignments than in terms of binary beliefs.
Talk about binary beliefs is folk psychology. As Wikipedia says:
Folk psychology embraces everyday concepts like “beliefs”, “desires”, “fear”, and “hope”.
People who think about mind and brain sometimes express misgivings about folk psychology, sometimes going so far as to suggest that beliefs and desires no more exist than witches do. I’m actually taking folk psychology somewhat seriously in granting that, in addition to a fundamental, Bayesian level of cognition, there is also a more superficial, folk-psychological level: that (binary) beliefs exist in a way that witches do not. I’ve gone and described a role that binary, folk-psychological beliefs can play in the mental economy, as a mediator between Bayesian probability assignment and binary action.
But a problem immediately arises: when probability assignments are mapped to different actions, different thresholds apply to different actions. When that happens, the function of declaring a (binary) belief (publicly or silently to oneself) breaks down, because the threshold for declaring belief that is appropriate to one action is inappropriate to another. I attempted to illustrate this breakdown with the two dialogs between Bob and Max: Bob revises his threshold up mid-conversation when he discovers that the actions he is called upon to perform in light of his stated beliefs are riskier than he had anticipated.
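A bare-bones illustration of that breakdown, with made-up numbers rather than anything from the Bob and Max dialogs: suppose a single proposition carries one probability but feeds two actions with very different stakes,

\[
P = 0.9, \qquad t_{\text{low-stakes}} = 0.6, \qquad t_{\text{high-stakes}} = 0.999 .
\]

Since P clears the first threshold but not the second, declaring “I believe it” is the right setting of the switch for the low-stakes action and the wrong one for the high-stakes action; no single yes/no declaration serves both.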
I think that in certain break-down situations, it can become problematic to assign binary, folk-psychological beliefs at all, and so we should fall back on Bayesian probability assignments to describe what the brain is doing. The idea of the Bayesian brain might of course break down as well; it too is just an approximation, but I think it is a closer approximation. So in those break-down situations, my inclination is to refrain from asserting that a person believes, or fails to believe, something. My preference is to try to understand his behavior in terms of a probability that he has assigned to a possibility, rather than in terms of believing or failing to believe.
Sadly, I think that there is a strong tendency to insist that there is one unique true answer to a question that we have been answering all our lives. For example, I think that for a small child who has not yet learned that the planet is a sphere, “up” is one direction which doesn’t depend on where the child is. And if you send that small child into space, he might immediately wonder, “which way is up?” In fact, even many adults may, in their gut, wonder, “which way is up?”, because deep in their gut they believe that there must be an answer to this, even though intellectually they understand that “up” does not always make sense. The gut feeling that there is a universal “up” that applies to everything shows itself when someone takes a globe or map of the Earth and turns it upside down. It just looks upside down, even though we understand intellectually that “up” and “down” don’t truly apply here. Something similar happens in science fiction space battles, where all the ships are oriented in relation to a universal “up”.
Similarly, I think there is a strong tendency to insist that there is one unique and true answer to the question, “what do I believe”. And so we answer the question, “what do I believe”, and we hold on tightly to the answer. Because of this, I think that introspection about “what I believe” is suspect.
As I said, I have not entirely figured out the implicit rules that underlie what we (declare to ourselves silently that we) believe. I’ve acknowledged that for P<50%, we seem to withhold (declaration of) belief regardless of what our value assignments are. That being the case, I’m not entirely sure how to answer questions about belief in the case of precautions against dangers with P<50%.
I find it extremely interesting, however, that Pascal actually seems to have bitten the bullet and advocated (declaration of) belief even when P<<50%, for sufficiently extreme value assignments.
So in those break-down situations, my inclination is to refrain from asserting that a person believes, or fails to believe, something. My preference is to try to understand his behavior in terms of a probability that he has assigned to a possibility, rather than in terms of believing or failing to believe.
I think this is the most common position held on this board—that’s why I found your model confusing.
It seems the edge cases that make it break are very common (for example, taking precautions against a flip of heads and a flip of tails). Moreover, I think the reason it doesn’t work on probabilities below 50% is the same as the reason it doesn’t work on probabilities >= 50%. What lesson do you intend to impart by it?
As an aside, my understanding of Pascal’s wager is that it is an exhortation to seek out the best possible evidence, rather than to “believe something because it would be beneficial if you did” (which doesn’t really make a lot of sense).