Eliezer and Bryan’s bet is at 1:1 odds, for a CPI-adjusted $100, that the world won’t be destroyed by Jan. 1, 2030.
After reading comments on my post “How to place a bet on the end of the world,” which was motivated by Bryan Caplan’s description of his bet with Eliezer, I concluded that you can’t extract information on confidence from odds on apocalyptic bets. Explanation is in the comments.
Bryan told me via email that these are the best odds he could get from Eliezer.
I think the best way to think about their bet is that it’s just for fun. We shouldn’t try to discern the true probabilities they hold for this outcome from it, or their level of confidence from its size. If it’s a tax on bullshit, it’s a very small tax given that Eliezer has built his career on the issue.
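To make that concrete, here is a minimal sketch of the expected-value argument. The numbers are illustrative and the structure is assumed rather than quoted from their actual terms: the skeptic pays up front, the doom-believer repays a CPI-adjusted $100 only if the world survives, and the discount factor standing in for time preference is made up.

```python
# Illustrative sketch only: the numbers and the bet structure are assumptions,
# not the actual terms of Eliezer and Bryan's bet. Assumed structure: the skeptic
# pays the doom-believer $100 up front, and the doom-believer repays a CPI-adjusted
# $100 (1:1 in real terms) only if the world survives to the deadline.

def doomer_expected_gain(p_doom: float, upfront: float = 100.0,
                         repayment: float = 100.0, discount: float = 0.8) -> float:
    """Expected present-value gain to the doom-believer from accepting the bet.

    The repayment only matters in worlds that survive (money is worthless to
    everyone if the world ends), and it is discounted because it is paid years
    later; `discount` is a made-up time-preference factor.
    """
    return upfront - (1.0 - p_doom) * discount * repayment

# The doom-believer comes out ahead in expectation across wildly different
# probabilities of doom, so accepting 1:1 odds reveals almost nothing about p_doom.
for p in (0.01, 0.25, 0.50, 0.90):
    print(f"p_doom = {p:.2f}: expected gain = ${doomer_expected_gain(p):+.2f}")
```

On those assumptions the doom-believer comes out ahead for essentially any probability of doom, so his willingness to take 1:1 odds mostly prices time preference and trust, not confidence.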
If I rationalize Eliezer’s behavior as a wise strategy for preventing AI doom, I’d say that he’s making the consequences of failure vivid, making the risk seem real, and loudly calling out mistakenly optimistic forecasts, all in the service of making sure the problem gets serious, sustained attention. Giving precise forecasts would let people who are invested in AI progress dunk on him and undermine his credibility by pointing out precisely when and how he was wrong, while neglecting the gigantic consequences if they themselves are wrong about continued AI capabilities research. Until the incentives he faces change, I expect his behavior to remain roughly the same.
You’re absolutely right, that bet with Bryan gives very little information! It definitely doesn’t compare to a proper track record across many questions.
I think your explanation for his behavior is good, but I don’t think it’s justified; or at least, I am deeply suspicious of him thinking anything thematically similar to “I have to obfuscate my forecasting competence, for the good of the world, but I’ll still tell people I’m good at it”. The more likely prior is just that people don’t want to lose influence/prestige. It’s like a Nobel laureate making predictions and then not seeming so special afterward.
Giving precise forecasts would let people who are invested in AI progress dunk on him and undermine his credibility by pointing out precisely when and how he was wrong, while neglecting the gigantic consequences if they themselves are wrong about continued AI capabilities research. Until the incentives he faces change, I expect his behavior to remain roughly the same.
But then anyone who makes a precise bet could lose out in the same way. I assume you don’t believe that betting in general is wrong, so where does the asymmetry come from? Is Yudkowsky excused from betting because he’s actually right?
Ah by the way, I think the link you posted accidentally links to this post.
Fixed, thanks!