Here’s a concise answer that straightforwardly applies the rule I already stated. Since my rule only applies to probabilities above 50%, and since P(being shot) = 10% (as I recall), we must consider the negation. Suppose P(I will be shot) is 10% and P(I will be stabbed) is 10%, and suppose that (for some reason) “I will be shot” and “I will be stabbed” are mutually exclusive. Since P < 50% for each of these, we turn it around and get:
P(I will not be shot) is 90% and P(I will not be stabbed) is 90%. Because the cost of being shot, and the cost of being stabbed, are so very high, the threshold for being convinced must be very high as well; set it to 99.9%. Since P = 90% for each of these, neither reaches my threshold for being convinced.
Therefore I am not convinced that I will not be shot and I am not convinced that I will not be stabbed. Therefore I will not go without my bulletproof body armor and I will not go without my stab-proof body armor.
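For concreteness, here is a minimal sketch of that reasoning in code; the 99.9% figure and the names are just the illustrative numbers from above, nothing more precise:

```python
# A sketch of the rule as stated: for P < 50% we consider the negation, and we
# only drop a precaution if we are "convinced" of the negation, i.e. its
# probability clears a threshold set very high because the cost of being
# wrong is very high.

CONVICTION_THRESHOLD = 0.999  # 99.9%, matching the example above

def keep_precaution(p_danger, threshold=CONVICTION_THRESHOLD):
    """Keep the precaution unless we are convinced of the negation."""
    p_negation = 1.0 - p_danger           # e.g. P(I will not be shot) = 90%
    convinced_of_safety = p_negation >= threshold
    return not convinced_of_safety        # not convinced -> keep the armor

print(keep_precaution(0.10))  # being shot: True, so keep the bulletproof armor
print(keep_precaution(0.10))  # being stabbed: True, so keep the stab-proof armor
```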
So the rule seems to work. The fact that these are mutually exclusive dangers doesn’t seem to affect the outcome. [Added: For what I consider to be a more useful discussion of the topic, see my other answer.]
{Added: see my other answer for a concise answer, which however leaves out a lot that I think is important to discuss}
For starters, I think there is no problem understanding these two precautions against mutually exclusive dangers in terms of probability assignments, what I consider the more fundamental level of how we think. In fact, I consider this fact—that we do prepare for mutually exclusive dangers—as evidence that our fundamental way of thinking really is better described in terms of probability assignments than in terms of binary beliefs.
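As a toy illustration of what I mean by the probability-assignment level (the costs here are entirely made up, and I assume for simplicity that each piece of armor fully averts its corresponding cost):

```python
# Even though "shot" and "stabbed" are mutually exclusive, each precaution
# pays for itself on its own expected-cost calculation, so nothing odd
# happens when we take both.

P_SHOT, P_STABBED = 0.10, 0.10                    # mutually exclusive dangers
COST_SHOT, COST_STABBED = 1_000_000, 1_000_000    # arbitrary large costs
COST_ARMOR = 100                                  # arbitrary small cost per item

def worth_taking(p_danger, cost_of_danger, cost_of_precaution):
    """Take the precaution if it costs less than the expected cost it averts."""
    return cost_of_precaution < p_danger * cost_of_danger

print(worth_taking(P_SHOT, COST_SHOT, COST_ARMOR))        # True: wear the bulletproof armor
print(worth_taking(P_STABBED, COST_STABBED, COST_ARMOR))  # True: wear the stab-proof armor
```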
Talk about binary beliefs is folk psychology. As Wikipedia says:
Folk psychology embraces everyday concepts like “beliefs”, “desires”, “fear”, and “hope”.
People who think about mind and brain sometimes express misgivings about folk psychology, sometimes going so far as to suggest that things like beliefs and desires no more exist than witches do. I’m actually taking folk psychology somewhat seriously in granting that, in addition to a fundamental, Bayesian level of cognition, there is also a more superficial, folk-psychological level—that (binary) beliefs exist in a way that witches do not. I’ve actually gone and described a role that binary, folk-psychological beliefs can play in the mental economy, as a mediator between Bayesian probability assignment and binary action.
But a problem immediately arises: in mapping probability assignments to actions, different thresholds apply to different actions. When that happens, the function of declaring a (binary) belief (publicly or silently to oneself) breaks down, because the threshold for declaring belief that is appropriate to one action is inappropriate to another. I attempted to illustrate this breakdown with the two dialogs between Bob and Max. Bob revises his threshold up mid-conversation when he discovers that the actions he is called upon to perform in light of his stated beliefs are riskier than he had anticipated.
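A rough sketch of that breakdown, with made-up numbers standing in for Bob’s situation:

```python
def willing_to_act(p, threshold):
    """Act only if the assigned probability clears the action's threshold."""
    return p >= threshold

p_claim = 0.95  # how much probability Bob assigns to the claim

# Different actions carry different stakes, hence different thresholds
# (both thresholds are invented for illustration).
thresholds = {
    "assert it in casual conversation": 0.90,
    "bet a month's salary on it": 0.99,
}

for action, threshold in thresholds.items():
    print(action, "->", willing_to_act(p_claim, threshold))
# True for the first action, False for the second: a single yes/no declaration
# of belief cannot track both answers at once, which is roughly Bob's predicament.
```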
I think that in certain break-down situations it can become problematic to assign binary, folk-psychological beliefs at all, and so we should fall back on Bayesian probability assignments to describe what the brain is doing. The idea of the Bayesian brain might of course also break down; it too is just an approximation, but I think it is a closer one. So in those break-down situations, my inclination is to refrain from asserting that a person believes, or fails to believe, something. My preference is to try to understand his behavior in terms of a probability that he has assigned to a possibility, rather than in terms of believing or failing to believe.
Sadly, I think that there is a strong tendency to insist that there is one unique true answer to a question that we have been answering all our lives. For example, to a small child who has not yet learned that the planet is a sphere, “up” is one direction that doesn’t depend on where the child is. And if you send that small child into space, he might immediately wonder, “which way is up?” In fact, even many adults may, in their gut, wonder, “which way is up?”, because deep in their gut they believe that there must be an answer to this, even though intellectually they understand that “up” does not always make sense. The gut feeling that there is a universal “up” that applies to everything shows itself when someone takes a globe or map of the Earth and turns it upside down. It just looks upside down, even though we understand intellectually that “up” and “down” don’t truly apply here. Something similar happens in science fiction space battles, where all the ships are oriented in relation to a universal “up”.
Similarly, I think there is a strong tendency to insist that there is one unique and true answer to the question, “what do I believe?” And so we answer the question, “what do I believe?”, and we hold on tightly to the answer. Because of this, I think that introspection about “what I believe” is suspect.
As I said, I have not entirely figured out the implicit rules that underlie what we (declare to ourselves silently that we) believe. I’ve acknowledged that for P < 50%, we seem to withhold (declaration of) belief regardless of what our value assignments are. That being the case, I’m not entirely sure how to answer questions about belief in the case of precautions against dangers with P < 50%.
I find it extremely interesting, however, that Pascal actually seems to have bitten the bullet and advocated (declaration of) belief even when P << 50%, for sufficiently extreme value assignments.
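A sketch of that Pascal-style calculation, with arbitrary placeholder numbers rather than anything Pascal himself wrote:

```python
# A tiny probability multiplied by a sufficiently extreme value assignment
# still dominates the expected value of acting.

p = 0.001               # P << 50%
value_if_true = 10**9   # stand-in for an "infinite" stake
cost_of_acting = 1      # modest cost of the recommended course of action

expected_gain = p * value_if_true - cost_of_acting
print(expected_gain)      # 999999.0
print(expected_gain > 0)  # True: the extreme value swamps the tiny probability
```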
So in those break-down situations, my inclination is to refrain from asserting that a person believes, or fails to believe, something. My preference is to try to understand his behavior in terms of a probability that he has assigned to a possibility, rather than in terms of believing or failing to believe.
I think this is the most common position held on this board—that’s why I found your model confusing.
It seems the edge cases that make it break are very common (for example, taking precautions against a flip of heads and a flip of tails). Moreover, I think the reason it doesn’t work on probabilities below 50% is the same as the reason it doesn’t work on probabilities >= 50%. What lesson do you intend to impart by it?
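For concreteness, the coin case in the same terms (the threshold is just the illustrative figure from the armor example):

```python
# With P(heads) = P(tails) = 0.5, you are convinced of neither outcome nor of
# either negation, yet you still sensibly prepare for both outcomes.

THRESHOLD = 0.999
p_heads = p_tails = 0.5

for name, p in [("heads", p_heads), ("tails", p_tails),
                ("not heads", 1 - p_heads), ("not tails", 1 - p_tails)]:
    print(f"convinced of {name}:", p >= THRESHOLD)   # False in every case
```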
As an aside, my understanding of Pascal’s wager is that it is an exhortation to seek out the best possible evidence, rather than to “believe something because it would be beneficial if you did” (which doesn’t really make a lot of sense).
Okay, this makes sense, though I think I’d use ‘belief’ differently.
What does it mean in a situation where I take precautions against two possible but mutually exclusive dangers?