Money is only valuable to me prior to the point of no return, so the value to me of a bet that pays off after that point is reached is approximately zero. In fact it’s not just money that has this property. This means that no matter how good the odds are that you offer me, and even if you pay up front, I’m better off just taking out a low-interest loan instead.
As I understand it, you’re arguing that if Eliezer wants $100 now that he doesn’t need to pay off for 13 years, it would be cheaper to take it in loan interest than to make a bet with Bryan.
Using this loan calculator with the default 6% interest over 13 years compounded annually, Eliezer would owe $213.29 when his loan matured, rather than the $200 he’d owe Bryan if he loses the bet.
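The arithmetic behind that figure is just annual compounding. A quick sketch (the 6% rate is the calculator's default, as noted above; nothing here is from the actual bet terms):

```python
# Future value of a $100 loan at 6% interest, compounded annually for 13 years
principal = 100
rate = 0.06
years = 13

repayment = principal * (1 + rate) ** years
print(round(repayment, 2))  # 213.29
```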
Eliezer enjoys $100 now, whether he bets with Bryan or takes out a loan.
If Eliezer loses the bet, his outcome is approximately the same whether he took out the loan or bet with Bryan: he owes $200. If he wins the bet, his outcome is also the same whether he took out the loan or bet with Bryan. Bryan’s outcomes are also identical whether he lends out $100 at 6% annually compounded interest for 13 years, or bets with Eliezer. He’s out $100 now, makes back the same either way if he wins, and isn’t worried about money anymore if he loses.
Edit: I no longer think you can adjust by inflating the odds. If Bryan offers less favorable betting odds than Eliezer could get at the bank, then Eliezer could just take out the biggest loan he can get and ignore Bryan’s offer to bet. I no longer think you can extract information on people’s confidence about the end of the world based on a bet, unless you assume they’re both acting as if they didn’t understand basic finance.
Edit 2: However, the limiting factor here is the opportunity cost for Eliezer. The opportunity to take any loan at all, including an equivalent bet with Bryan, should look attractive to Eliezer (if we ignore the dollar vs. utility objection). Hence, if he were unwilling to take a bet with Bryan, or wanted to keep it small, then this should still be some evidence that he’s not as confident in his claim as he’s projecting. An apocalypticist who won’t take out massive loans on the expectation he’ll never have to pay them back is not behaving in a manner consistent with his statements.
It seems like we could deal with this by inflating the odds. For example, if Eliezer bet Bryan at 9:1 odds, then Eliezer would get $100 now, and Bryan would make back $1000 if he wins, an $800 surplus over what he’d have gotten loaning his money out. Likewise, if Eliezer loses the bet, he would lose much more money paying back Bryan than he’d have lost taking out a 13-year loan.
So it seems like we can deal with this problem with an adjustment for opportunity cost. Eliezer and Bryan’s bet is very close to a refusal to bet at all, since there is no difference in outcome for either party whether they loan or bet, no matter who wins. The real stakes of such a bet are something like the odds beyond adjustments for opportunity cost. In this case, if Eliezer were paid $100 up front by Bryan, and had to pay back about $400 if he lost the bet in 13 years, this would seem to me to be actually equivalent to a bet at 2:1 odds.
In general, the formula to calculate the “true odds” of the bet would be:
([Payment if Eliezer loses the bet] - [Total repayment, principal plus interest, on an equivalent loan]) / [Up-front payment by Bryan]
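As a sketch of that formula, with the loan's total repayment standing in for the opportunity cost (the $400 figure and the 6%/13-year loan terms are the ones from the example above; the function name is mine):

```python
def true_odds(payment_if_lose, upfront, rate=0.06, years=13):
    """Odds of the bet after netting out an equivalent loan.

    payment_if_lose: what Eliezer pays Bryan if he loses
    upfront: what Bryan pays Eliezer now
    """
    loan_repayment = upfront * (1 + rate) ** years  # principal + interest
    return (payment_if_lose - loan_repayment) / upfront

# $100 up front, ~$400 if Eliezer loses -> roughly 2:1
print(round(true_odds(400, 100), 2))  # 1.87
```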
Besides, I don’t need money right now anyway, at least to continue my research activities. I’d only be able to achieve significant amounts of extra good if I had quite a lot more money.
This points to a more general argument against using bets to operationalize a person’s confidence in their claim. After all, no resource (status, time, or money) translates into personal utility in a linear fashion.
Even if resources and utility had a linear relationship, bets can be positive-sum for both participants, negative-sum, or mixed. Eliezer and Bryan might both “earn” more than a couple hundred bucks in reputation, or even dollars, just by being known to have made this bet. I can also imagine a counterpart to Bryan for whom taking a bet with Bryan at all would be costly, perceived as unseemly. By contrast, Bryan builds his reputation partly on being a betting man, and I suspect he enjoys the activity for its own sake. I think this should be taken into account in interpreting people’s willingness or refusal to bet.
Small bets seem to still be useful as a first measure to undermine punditry and motivate precise and explicit reasoning about empirical likelihoods. Insisting that a person making a confident claim ought to back it with favorable odds for the person on the other side of the bet seems to also be a good anti-punditry measure.
Overall, considering these points has downgraded my belief in the value of betting as a way to establish people’s true confidence levels. Refusal to take a bet to back one’s confident claims still doesn’t look good, on the margin. But it’s not devastating. We also shouldn’t naively interpret betting odds as real statements about the bettor’s exact confidence levels.
From this perspective, it seems like one of the virtues of real-money prediction markets, as opposed to personal bets, is that they’re relatively anonymous. This removes most of the concern that people’s eagerness or unwillingness to bet is due to reputational concerns about the act of betting, rather than reputational concerns about the prospect of being right or wrong. I haven’t worked out the math, but it also seems like averaging would tend to eliminate the problems with differing utility functions, another point in favor of prediction markets.
Edit: This argument against extracting confidence information from bets is still, I think, correct. I’d now go further and say that you can’t extract any information at all from a bet on the end of the world, unless you also assume the participants are acting as though they do not understand basic finance.
Toy model:
Imagine that, for you and your counterpart, making $1000 is worth 1 utility point to you, and losing $1000 is worth −2 utility points. Then you can work out your bet in terms of utility point odds, and then reconvert to dollars to enact the bet.
This becomes more complex if you and your counterpart assign different utilities to money. Let’s make some simplifying assumptions. We’ll ignore opportunity cost, assume net worth doesn’t change except through the bet payments themselves, and assume zero inflation.
Let’s assume also that everybody gets utility points equal to the square root of their net worth in dollars.
Bryan has $10,000, 100 utility points. Eliezer has $100, 10 utility points. Eliezer wants to bet at 2:1 odds in utility points that the world will end in 2030. They choose 1 utility point as an upfront payment from Bryan to Eliezer, and 2 utility points as the payment from Eliezer to Bryan if the world doesn’t end.
For Eliezer to get 1 utility point (going from 10 points to 11, i.e. from $100 to $121), he needs $21. But paying that $21 would only cost Bryan about 0.1 utility points (√10,000 − √9,979 ≈ 0.105).
If Eliezer loses his bet, he’d need to end up having a total of 8 utility points, while Bryan would need to end up having 102 utility points. So Eliezer would need to give up $57 of his $121 (down to $64), and Bryan would need to gain $425 (up to $10,404).
Because of this, Bryan and Eliezer can only place a bet if they care about money about the same as each other, and it’s not even clear that money odds will reflect their actual utility.
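The toy model's dollar conversions follow directly from the square-root assumption. A sketch (all numbers are the ones in the example above; the helper names are mine):

```python
import math

def utility(dollars):
    # Square-root utility assumption
    return math.sqrt(dollars)

def wealth(utility_points):
    # Inverse: dollars needed to sit at a given utility level
    return utility_points ** 2

# Upfront payment: Eliezer goes from 10 to 11 utility points
upfront = wealth(11) - wealth(10)                            # $21
cost_to_bryan = utility(10_000) - utility(10_000 - upfront)  # ~0.105 points

# If Eliezer loses: he falls from 11 points ($121) to 8 points ($64)...
eliezer_pays = wealth(11) - wealth(8)                        # $57
# ...while Bryan rises from ~99.9 points ($9,979) to 102 points ($10,404)
bryan_gains = wealth(102) - (10_000 - upfront)               # $425
```

The mismatch between the $57 Eliezer would pay and the $425 Bryan would need to receive is the crux: no single dollar transfer settles this bet at the agreed utility odds for both parties.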
Bryan says he knows all this stuff, and these were just the best odds he could get. New interpretation: Bryan’s having fun betting, and Eliezer’s smart enough to know that if he loses, he just got a long-term loan from Bryan at a somewhat favorable rate.
That’s true of Caplan’s bet, but I think you could correct for this in the odds you set. It would be useful to explicitly distinguish the correction for opportunity cost and interest from the odds of the bet itself.
I’m more interested in schemes to bet reputation/status or labor.
I agree that reputation (I’d say specifically credibility) is the important thing to wager, but I think any public bet implicitly does that.
If, in 2030, there are still humans on Earth’s surface, then the takeaway is “AI x-risk proponent Yudkowsky proved wrong in bet”, and Yudkowsky loses credibility. (See Ehrlich’s famous bet for an example of this pattern.) The upside is raising concern about AI x-risk in the present (2022).
This is a good trade-off if you think increasing concern about AI x-risk in 2022-2029 is worth decreasing concern about AI x-risk in 2030+. Of course, if AGI turns out to be invented before 2030, the trade-off seems good. In the event that it’s not, the trade-off seems bad.
Well, in that case I’ll have you know that about two years ago I made a bet for $1000 with Tobias Baumann. Resolution date was also 2030. Can send more details if you like.
And in general if anyone wants to send me money now, I’ll promise to pay you back with interest in 2030. But please only do this if it doesn’t create significant administrative overhead for me… in fact come to think of it this probably would, I don’t see how this is worth it for me… hmm..
Anyone who would accord me higher status and respect if they saw me making such bets should totally make such bets with me. Unless I don’t care about their respect… which is probably true for most people… but not most people on LW...
The problem with this is that the normal market offers better odds. Just take out a low-interest loan.
The other problem with this is that money isn’t important right now. I’m more interested in schemes to bet reputation/status or labor.
(I thought about this a bit last year: https://www.lesswrong.com/posts/4FhiSuNv4QbtKDzL8/how-can-i-bet-on-short-timelines https://www.lesswrong.com/posts/kYa4dHP5MDnqmav2w/is-this-a-good-way-to-bet-on-short-timelines)
I just emailed Bryan to point the loan opportunity cost issue out. I’ll update if I hear back.
Thanks for looking into this!
IIRC Eliezer made some joke about how Bryan’s never lost a bet and maybe he can leverage this miraculous regularity to reduce AI risk :)