Besides, I don’t need money right now anyway, at least not to continue my research. I’d only be able to do significantly more good if I had quite a lot more money.
This points to a fully general argument against using bets to operationalize a person’s confidence in their claims. After all, no resource (status, time, or money) translates into personal utility in a linear fashion.
Even if resources and utility were linearly related, bets can be positive-sum for both participants, negative-sum, or mixed. Eliezer and Bryan might both “earn” more than a couple hundred bucks in reputation, or even dollars, just by being known to have made this bet. I can also imagine a counterpart to Bryan for whom taking any bet with Bryan would be costly because it would be perceived as unseemly. By contrast, Bryan builds his reputation partly on being a betting man, and I suspect he enjoys the activity for its own sake. All this should be taken into account when interpreting people’s willingness or refusal to bet.
Small bets still seem useful as a first measure to undermine punditry and to motivate precise, explicit reasoning about empirical likelihoods. Insisting that a person making a confident claim back it with odds favorable to the other side of the bet also seems like a good anti-punditry measure.
Overall, considering these points has downgraded my belief in the value of betting as a way to establish people’s true confidence levels. Refusing a bet to back one’s confident claims still doesn’t look good, on the margin. But it’s not devastating. We also shouldn’t naively interpret betting odds as exact statements about the bettor’s confidence levels.
From this perspective, one of the virtues of real-money prediction markets, as opposed to personal bets, is that they’re relatively anonymous. This removes most of the concern that people’s eagerness or unwillingness to bet is driven by reputational concerns about the act of betting rather than about the prospect of being right or wrong. I haven’t worked out the math, but averaging across many participants also seems likely to wash out the problem of differing utility functions, another point in favor of prediction markets.
Edit: This argument against extracting confidence information from bets is still, I think, correct. I’d now go further and say that you can’t extract any information at all from a bet on the end of the world, unless you also assume the participants are acting as though they do not understand basic finance.
Toy model:
Imagine that, for you and your counterpart, making $1000 is worth 1 utility point to you, and losing $1000 is worth −2 utility points. Then you can work out your bet in terms of utility point odds, and then reconvert to dollars to enact the bet.
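As a quick sketch of that conversion (the rates here, 1 point per $1000 gained and 2 points per $1000 lost, are just the hypothetical numbers above):

```python
# Hypothetical utility schedule from the example above:
# gaining $1000 is worth +1 point, losing $1000 is worth -2 points.
POINTS_PER_1000_GAINED = 1
POINTS_PER_1000_LOST = 2

def dollars_received_for(points):
    """Dollars a party must receive to gain `points` of utility."""
    return 1000 * points / POINTS_PER_1000_GAINED

def dollars_paid_for(points):
    """Dollars a party must pay to lose `points` of utility."""
    return 1000 * points / POINTS_PER_1000_LOST

# A bet at 2:1 odds in utility points: the winner gains 1 point,
# the loser gives up 2.
stake = dollars_paid_for(2)              # loser pays $1000
assert stake == dollars_received_for(1)  # exactly what the winner needs
print(stake)  # 1000.0
```

Note that under this particular schedule a 2:1 bet in utility points happens to settle at even dollar stakes: the $1000 the loser hands over is exactly the $1000 the winner needs for one point.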
This becomes more complex if you and your counterpart assign different utilities to money. Let’s make some simplifying assumptions: ignore opportunity cost, assume your net worth doesn’t otherwise change, and assume zero inflation.
Let’s assume also that everybody gets utility points equal to the square root of their net worth in dollars.
Bryan has $10,000, 100 utility points. Eliezer has $100, 10 utility points. Eliezer wants to bet at 2:1 odds in utility points that the world will end in 2030. They choose 1 utility point as an upfront payment from Bryan to Eliezer, and 2 utility points as the payment from Eliezer to Bryan if the world doesn’t end.
For Eliezer to get 1 utility point, he needs $21. But that would only cost Bryan about 0.1 utility points.
If Eliezer loses his bet, he needs to end up with 9 utility points (his 11 after the upfront payment, minus the 2 he owes), while Bryan needs to end up with about 102. So Eliezer would give up $40 of his $121, while Bryan would need to gain roughly $404.
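The toy model’s arithmetic can be checked directly under the square-root assumption (a sketch; the dollar figures are just what the model implies):

```python
import math

def utility(wealth):
    """Utility points = square root of net worth in dollars."""
    return math.sqrt(wealth)

def wealth_for(points):
    """Dollars needed to hold `points` utility points."""
    return points ** 2

bryan, eliezer = 10_000, 100   # starting net worths

# Upfront payment: raise Eliezer from 10 to 11 utility points.
upfront = wealth_for(utility(eliezer) + 1) - eliezer   # $21
eliezer += upfront
bryan -= upfront
# This costs Bryan only about 0.105 points: 100 - sqrt(9979).

# If the world doesn't end, Eliezer owes 2 utility points...
eliezer_pays = eliezer - wealth_for(utility(eliezer) - 2)
# ...but giving Bryan 2 points takes far more dollars:
bryan_needs = wealth_for(utility(bryan) + 2) - bryan

print(round(eliezer_pays), round(bryan_needs))  # 40 404
```

The $40 Eliezer would hand over is an order of magnitude short of the roughly $404 Bryan would need to receive to gain his 2 points, so no single dollar transfer settles the bet at these utility odds.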
Because of this, Bryan and Eliezer can only settle on a dollar bet if their utility functions for money are roughly the same, and even then it’s not clear that the dollar odds will reflect their actual utility odds.