If I think that I have a 10% chance of being shot today, and I wear a bulletproof vest in response, that is not the same as being convinced that I will be shot.
Your actual belief in different things does not, so far as I can tell, depend on how useful it is to act as if those things are true. How you act in response to your beliefs does.
Edit:
Actually, wait a sec.
By this logic, Pascal can be convinced of God’s existence even if the probability he assigns to it is much less than 50% - which admittedly seems to represent a breakdown in my understanding of “convinced”, but I still think it works above 50%.
Just follow through on the fact that you noticed this.
You have only pointed out an incompleteness in my account that I had already acknowledged: below 50%, the account I gave of being convinced no longer seems to hold.
The perfect is the enemy of the good. That an account does not cover all cases does not mean the account is not on the right track. A strong attack on the account would be to offer a better account. JoshuaZ already offered an alternative account by implication, which (as I understand it) is that belief is simply a matter of a constant cutoff: for example, a probability assignment above 80% counts as belief, or maybe above 50%, or maybe above 90%.
But here’s the thing: if you believe something, aren’t you willing to act on it? We regularly explain our actions in terms of beliefs. For example, suppose you walk out of the house taking your wife’s car keys. You get to your car, notice that you can’t start the engine, and at that point discover that you are holding your wife’s car keys. Suppose she asks you, “why did you take my keys”? The answer seems obvious: “I took these keys because I believed they were my car keys.” Isn’t that obvious? Of course that’s why you took them.
To restate: you did something that would have been successful had those keys been your keys. In other words, you acted in a way that would have been successful had your belief been true.
And I think this is generally a principle by which we explain our actions, particularly our mistaken actions. The explanation is that we acted in a way that would have worked out had our beliefs been correct. And so, your actions reveal your beliefs. By taking your wife’s car keys, you reveal your belief that they are your car keys.
So your actions reveal your beliefs. But here’s the problem: your actions are a product of a combination of your probability assignments and your value assignments, the costs and benefits. That’s why you are more ready to take risky action when the downside is low and the upside is high, and less ready to take risky action when the downside is high and the upside is low. So your actions are a product of a combination of probability assignments and value assignments.
But your actions meanwhile are in accordance with your beliefs.
Conclusion follows: your beliefs are a product of a combination of probability assignments and value assignments.
Now, as I said, this picture is incomplete. But it seems to hold within certain limits.
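Since the argument leans on the claim that actions are a product of probability assignments and value assignments together, here is a minimal numeric sketch of that claim (the numbers are invented for illustration): the same probability assignment licenses the risky action in one case and not in the other, purely because the stakes differ.

```python
def expected_value(p_success, upside, downside):
    # Expected return of taking the risky action; not acting is worth 0 here.
    return p_success * upside - (1 - p_success) * downside

p = 0.6  # the same probability assignment in both cases

# Low downside, high upside: the action is worth taking.
print(expected_value(p, upside=100, downside=5))   # 58.0 -> act
# High downside, low upside: the same probability no longer justifies acting.
print(expected_value(p, upside=5, downside=100))   # -37.0 -> don't act
```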
But here’s the thing: if you believe something, aren’t you willing to act on it? We regularly explain our actions in terms of beliefs. For example, suppose you walk out of the house taking your wife’s car keys. You get to your car, notice that you can’t start the engine, and at that point discover that you are holding your wife’s car keys. Suppose she asks you, “why did you take my keys”? The answer seems obvious: “I took these keys because I believed they were my car keys.” Isn’t that obvious? Of course that’s why you took them.
A utility maximizing Bayesian doesn’t say “oh, this has the highest probability so I’ll act like that’s true.” A utility maximizing Bayesian says “what course of action will give me the highest expected return given the probability distribution I have for all my hypotheses?” To use an example that might help, suppose A declares that they are going to toss two standard six-sided fair dice and take the sum of the two values. If anyone guesses the correct result, then A will pay the guesser $10. I assign a low probability to the result being “7”, but that’s still my best guess. And one can construct other situations (if, for example, the payoff were $1000 but only for a correct guess that happened to be an even number, then guessing 6 or guessing 8 makes the most sense). Does that help?
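For what it’s worth, here is the arithmetic behind the dice example, written out as a small Python sketch. The second payout rule is my reading of the parenthetical: a correct guess pays $1000 only if the guessed number is even, and nothing otherwise.

```python
from fractions import Fraction

# Probability of each possible sum of two fair six-sided dice.
p_sum = {s: Fraction(0) for s in range(2, 13)}
for a in range(1, 7):
    for b in range(1, 7):
        p_sum[a + b] += Fraction(1, 36)

# Rule 1: any correct guess pays $10.
ev_flat = {g: 10 * p for g, p in p_sum.items()}

# Rule 2 (my reading): a correct guess pays $1000, but only if the guess is even.
ev_even = {g: (1000 * p if g % 2 == 0 else Fraction(0)) for g, p in p_sum.items()}

print(max(ev_flat, key=ev_flat.get))   # 7: still the best guess, despite P(7) being only 6/36
print(max(ev_even, key=ev_even.get))   # 6: it ties with 8 at about $138.89, while 7 now pays nothing
```

Either way, the point stands: the chosen action maximizes expected return given the whole distribution, not merely the probability of the single most likely hypothesis.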
A utility maximizing Bayesian says “what course of action will give me the highest expected return given the probability distribution I have for all my hypotheses?”
That matches my own description of what the brain does. I wrote briefly:
your actions are a product of a combination of your probability assignments and your value assignments, the costs and benefits.
which I explain elsewhere in more detail, and which matches your description of the utility maximizing Bayesian. It is the combination of your probability assignments and your value assignments which produces your expected return for each course of action you might take.
Does that help?
Depends what you mean. You are agreeing with my account, with the exception that you are saying that this describes a “utility maximizing Bayesian”, and I am saying that it describes any brain (more or less). That is, I think that brains work more or less in accordance with Bayesian principles, at least in certain areas. I don’t think the brain’s calculation is tremendously precise, but I expect that it is good enough for survival.
Here’s a simple idea: everything we do is an action. To speak is to do something. Therefore speech is an action. A declaration of belief is speech. So a declaration of belief is an action.
Now, let us consider what CuSithBell says:
Your actual belief in different things does not, so far as I can tell, depend on how useful it is to act as if those things are true. How you act in response to your beliefs does.
So, he agrees that how you act depends on utility. But, contrary to what he appears to believe, to declare a belief is to act—the action is linguistic. Therefore how you declare your beliefs depends on utility—that is, on the utility of making that declaration.
The utility of a declaration depends on its context, on how the declaration is used. And declarations are used. We make assertions, draw inferences, and consequently, act. So our actions depend on our statements. So our statements must be adjusted to the actions that depend on them. If someone is considering a highly risky undertaking, then we will avoid making assertions of belief unless our probability assignments are very high.
Maybe people have noticed this: people adjusting their statements, even retracting certain assertions of belief, once they discover that those statements are going to be put to a riskier use than they had thought. Maybe they have noticed it and taken it to be an inconsistency? No, it’s not an inconsistency. It’s a natural consequence of the process by which we decide where the threshold is. Here’s a bit of dialog:
Bob: There’s no such thing as ghosts.
Max: Let’s stay in this haunted house overnight.
Bob: Forget it!
Max: Why not?
Bob: Ghosts!
For one purpose (which involves no personal downside), Bob declares a disbelief in ghosts. For another purpose (which involves a significant personal downside if he’s wrong), Bob revises his statement. Here’s another one:
Bob: Bullets please. My revolver is empty.
Max: How do you know?
Bob: How do you think I know?
Max: Point it at your head and pull the trigger.
Bob: No!
Max: Why not?
Bob: Why do you think?
For one purpose (getting bullets), the downside is small, so Bob has no trouble saying that he knows his revolver is empty. For the other purpose, the downside is enormous, so Bob does not say that he knows it’s empty.
So, he agrees that how you act depends on utility. But, contrary to what he appears to believe, to declare a belief is to act—the action is linguistic. Therefore how you declare your beliefs depends on utility—that is, on the utility of making that declaration.
I apologize for giving you the impression I disagree with this. By ‘being convinced’, I thought you were talking about belief states rather than declarations of belief, and thence these errors arose (yes?).
I think that belief is a kind of internal declaration of belief, because it serves essentially the same function (internally) as declaration of belief serves (externally). Please allow me to explain.
There are two pictures of how the brain works which don’t match up comfortably. On one picture, the brain assigns a probability to something. On the other picture, the brain either believes, or fails to believe, something. The reason they don’t match up is that in the first picture the range of possible brain-states is continuous, ranging from P=0 to P=1. But in the second picture, the range of possible brain-states is binary: one state is the state of belief, the other is the state of failure to believe.
So the question then is, how do we reconcile these two pictures? My current view is that on a more fundamental level, our brains assign probabilities. And on a more superficial level, which is partially informed by the fundamental level, we flip a switch between two states: belief and failure to believe.
I think a key question here is: why do we have these two levels, the continuous level which assigns probabilities, and the binary level which flips a switch between two states? I think the reason for the second level is that action is (usually) binary. If you try to draw a map from probability assignment to best course of action (physical action involving our legs and arms), what you find is that the optimal leg/arm action quite often does not range continuously as probability assignment ranges from 0 to 1. Rather, at some threshold value, the optimal leg/arm action switches from one action to another, quite different action—with nothing in between.
So the level of action is a level populated by distinct courses of action with nothing in between, rather than a continuous range of action. What I think, then, is that the binary level of belief versus failure to believe is a kind of half-way point between probability assignments and leg/arm action. What it is, is a translation of assignment of probability (which ranges continuously from zero to one) into a non-continuous, binary belief which is immediately translatable into decision and then into leg/arm action.
But as I think has been agreed, the optimal course of action does not depend merely on probability assignments. It also depends on value assignments. So, depending on your value assignments, the optimal course of action may switch from A to B at P=60%, or alternatively at P=80%, etc. In the case of crossing the street, I argued that the optimal course of action switches at P>99.9%.
But binary belief (i.e. belief versus non-belief), I think, is immediately translatable into decision and action. That, I think, is the function of binary belief. But in that case, since optimal action switches at different P depending on value assignments, then belief must also switch between belief and failure to believe at different P depending on value assignments.
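Here is a minimal sketch of that switch point, under the simplifying assumption that doing nothing is worth zero and that acting yields a benefit if the belief is true and a cost if it is false; the specific payoff numbers are invented. The action becomes optimal exactly where the expected value crosses zero, i.e. at P = cost / (cost + benefit), which is how different value assignments move the threshold from 50% to 80% to above 99.9%.

```python
def switch_point(benefit_if_true, cost_if_false):
    # Probability at which acting-as-if-true starts to beat doing nothing:
    # p * benefit - (1 - p) * cost >= 0  =>  p >= cost / (cost + benefit).
    return cost_if_false / (cost_if_false + benefit_if_true)

# Symmetric stakes: the switch sits at 50%.
print(switch_point(benefit_if_true=10, cost_if_false=10))      # 0.5
# Downside four times the upside: the switch moves to 80%.
print(switch_point(benefit_if_true=10, cost_if_false=40))      # 0.8
# Crossing the street: saving a minute vs. being hit by a car.
print(switch_point(benefit_if_true=1, cost_if_false=10_000))   # 0.9999...
```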
Okay, this makes sense, though I think I’d use ‘belief’ differently. What does it mean in a situation where I take precautions against two possible but mutually exclusive dangers?

Here’s a concise answer that straightforwardly applies the rule I already stated. Since my rule only applies above 50%, and since P(being shot)=10% (as I recall), we must consider the negation. Suppose P(I will be shot) is 10% and P(I will be stabbed) is 10%, and suppose that (for some reason) “I will be shot” and “I will be stabbed” are mutually exclusive. Since P<50% for each of these, we turn it around and get:
P(I will not be shot) is 90% and P(I will not be stabbed) is 90%. Because the cost of being shot, and the cost of being stabbed, are so very high, the threshold for being convinced must be very high as well—set it to 99.9%. Since P=90% for each of these, neither reaches my threshold for being convinced.
Therefore I am not convinced that I will not be shot and I am not convinced that I will not be stabbed. Therefore I will not go without my bulletproof body armor and I will not go without my stab-proof body armor.
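Spelled out as a small sketch with the numbers used above (the 99.9% threshold is the one chosen here because the costs are so high; the helper function is just for illustration):

```python
def convinced(p, threshold):
    # Binary conviction only when the probability clears the threshold.
    return p >= threshold

threshold = 0.999  # set very high because being wrong is catastrophic
precaution = {"shot": "bulletproof vest", "stabbed": "stab-proof armor"}

for danger, p in [("shot", 0.10), ("stabbed", 0.10)]:
    p_not = 1 - p  # the rule applies to the negation, since p < 50%
    if not convinced(p_not, threshold):
        print(f"Not convinced I will not be {danger}: wear the {precaution[danger]}.")
```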
So the rule seems to work. The fact that these are mutually exclusive dangers doesn’t seem to affect the outcome. [Added: For what I consider to be a more useful discussion of the topic, see my other answer.]
[Added: see my other answer for a concise answer, which however leaves out a lot that I think is important to discuss.]
For starters, I think there is no problem understanding these two precautions against mutually exclusive dangers in terms of probability assignments, what I consider the more fundamental level of how we think. In fact, I consider this fact—that we do prepare for mutually exclusive dangers—as evidence that our fundamental way of thinking really is better described in terms of probability assignments than in terms of binary beliefs.
Talk about binary beliefs is folk psychology. As Wikipedia says:
Folk psychology embraces everyday concepts like “beliefs”, “desires”, “fear”, and “hope”.
People who think about mind and brain sometimes express misgivings about folk psychology, sometimes going so far as to suggest that things like beliefs and desires no more exist than witches exist. I’m actually taking folk psychology somewhat seriously in granting that, in addition to a fundamental, Bayesian level of cognition, there is also a more superficial, folk-psychological level—that (binary) beliefs exist in a way that witches do not. I’ve actually gone and described a role that binary, folk-psychological beliefs can play in the mental economy, as a mediator between Bayesian probability assignment and binary action.
But a problem immediately arises: when probability assignments are mapped to different actions, different thresholds apply to different actions. When that happens, the function of declaring a (binary) belief (publicly or silently to oneself) breaks down, because the threshold for declaring belief appropriate to one action is inappropriate to another. I attempted to illustrate this breakdown with the two dialogs between Bob and Max. Bob revises his threshold up mid-conversation when he discovers that the actions he is called upon to perform in light of his stated beliefs are riskier than he had anticipated.
I think that in certain break-down situations it can become problematic to assign binary, folk-psychological beliefs at all, and so we should fall back on Bayesian probability assignments to describe what the brain is doing. The idea of the Bayesian brain might of course also break down, since it too is just an approximation, but I think it is a closer approximation. So in those break-down situations, my inclination is to refrain from asserting that a person believes, or fails to believe, something. My preference is to try to understand his behavior in terms of a probability that he has assigned to a possibility, rather than in terms of believing or failing to believe.
Sadly, I think that there is a strong tendency to insist that there is one unique true answer to a question that we have been answering all our lives. For example, to a small child who has not yet learned that the planet is a sphere, “up” is one direction which doesn’t depend on where the child is. And if you send that small child into space, he might immediately wonder, “which way is up?” In fact, even many adults may, in their gut, wonder, “which way is up?”, because deep in their gut they believe that there must be an answer to this, even though intellectually they understand that “up” does not always make sense. The gut feeling that there is a universal “up” that applies to everything arises when someone takes a globe or map of the Earth and turns it upside down. It just looks upside down, even though we understand intellectually that “up” and “down” don’t truly apply here. The same goes for science fiction space battles in which all the ships are oriented in relation to a universal “up”.
Similarly, I think there is a strong tendency to insist that there is one unique and true answer to the question, “what do I believe?” And so we answer the question, “what do I believe?”, and we hold on tightly to the answer. Because of this, I think that introspection about “what I believe” is suspect.
As I said, I have not entirely figured out the implicit rules that underlie what we (declare to ourselves silently that we) believe. I’ve acknowledged that for P<50%, we seem to withhold (declaration of) belief regardless of what our value assignments are. That being the case, I’m not entirely sure how to answer questions about belief in the case of precautions against dangers with P<50%.
I find it extremely interesting, however, that Pascal actually seems to have bitten the bullet and advocated (declaration of) belief even when P<<50%, for sufficiently extreme value assignments.
So in those break-down situations, my inclination is to refrain from asserting that a person believes, or fails to believe, something. My preference is to try to understand his behavior in terms of a probability that he has assigned to a possibility, rather than in terms of believing or failing to believe.
I think this is the most common position held on this board—that’s why I found your model confusing.
It seems the edge cases that make it break are very common (for example, taking precautions against a flip of heads and a flip of tails). Moreover, I think the reason it doesn’t work on probabilities below 50% is the same as the reason it doesn’t work on probabilities >= 50%. What lesson do you intend to impart by it?
As an aside, my understanding of Pascal’s wager is that it is an exhortation to seek out the best possible evidence, rather than to “believe something because it would be beneficial if you did” (which doesn’t really make a lot of sense).