Colloquial bets are offered by skeevy con artists who probably know something you don’t. Bayesian bets, on the other hand, are offered by nature.
That distinction seems a bit unclear, since con artists are a part of nature, and nature certainly knows something you don’t.
Here’s a toy situation where a Bayesian is willing to state their beliefs, but isn’t willing to accept bets on them. Imagine that I flip a coin, look at the result, but don’t tell it to you. You believe that the coin came up heads with probability 1⁄2, but you don’t want to make a standing offer to accept either side of the bet, because then I could just take your money.
In the general case, what should a Bayesian do when they’re offered a bet? I think they should either accept it, or update to a state of belief that makes the bet unprofitable (“you offered the bet because you know the coin came up heads, so I won’t take it”). That covers both bets offered by nature and bets offered by con artists. It’s also useful in arguments: you can offer a bet and force your opponent either to accept it or to publicly update their beliefs.
The update you make, however, may not be to the belief the bet is supposed to test. For instance, in your example, you believe the coin is fair. Someone flips it and says, “If you think this is a fair coin, I’ll bet you that the coin came up heads.” Because he is more likely to offer the bet if it did come up heads, you should update toward the coin having come up heads this time. However, you shouldn’t update much, if at all, on the belief that the coin is fair.
But the bet is being presented as a test of your belief that the coin is fair. So the fact that you updated doesn’t actually indicate that you have changed your mind on the important aspect of the bet.
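The update described above can be sketched as a one-line application of Bayes’ rule. The offer probabilities below are made-up numbers for illustration, not anything stated in the thread: suppose the offerer proposes the bet 90% of the time when the coin is heads and only 10% of the time when it is tails.

```python
def posterior_heads(prior_heads, p_offer_if_heads, p_offer_if_tails):
    """P(heads | bet offered), by Bayes' rule."""
    numerator = prior_heads * p_offer_if_heads
    denominator = numerator + (1 - prior_heads) * p_offer_if_tails
    return numerator / denominator

# Fair-coin prior of 1/2; assumed (hypothetical) offer probabilities.
p = posterior_heads(0.5, 0.9, 0.1)
print(round(p, 2))  # 0.9
```

Under these assumptions the offer moves you from 50% to 90% on “heads this time,” while saying nothing about whether the coin itself is fair: the fairness belief only entered through the 1/2 prior, which the offer gives no reason to revise.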
I find ‘Bayesians’ offering bets to be a very annoying phenomenon, mostly for this reason. Let’s say I want to convince you that I know something. I can start offering bets on it, trading future money for persuasion today. Persuasion is itself a resource that can be used to make more money elsewhere, so I can come out ahead even if the bets are losing ones; in some cases the persuasion can even be used to win the bet after all.
edit: also, with regard to “you offered the bet because you know the coin came up heads, so I won’t take it”: I can anticipate this and “offer” you a losing bet, knowing that the offer will make you update and the bet won’t take place (or is unlikely to).
The ‘con’ in ‘con artist’ stands for confidence, and acting confidently (offering apparent bets, i.e. bluffing) is a big part of it.
Thanks—I edited to make it a bit more clear. The hope was to distinguish between “feeling like you’re being offered a bet by an adversarial agent” and “feeling like you have to choose between all available actions”. It seems to me that most people associate “betting” with the former, while many aspiring Bayesians associate “betting” with the latter.
No, the difference is that con artists are another intelligence, and you are in competition with them. Any time you are competing against a better, more expert intelligence, that is an important difference.
The activities of others are important data, because they are often rationally motivated. If a con artist offers me a bet, that tells me he values his side of the bet more. If an expert investor sells a stock, they must believe the stock is worth less than some alternative investment. So when playing against them, assume the odds are bad enough to justify their actions.
Not sure where your comment disagrees with mine. I think you’re describing the same thing as “update to a state of belief that makes the bet unprofitable”.
Nassim Taleb has a paper on betting and long tails: “On the Difference between Binary Prediction and True Exposure with Implications for Forecasting Tournaments and Decision Making Research”.