Assuming that your AI timelines are well-approximated by “likely more than three years”, Zvi’s post on prediction market desiderata suggests that post-AGI evaluation is pretty dead-on-arrival for creating liquid prediction markets. Even laying aside the conditional-on-AGI dimension, the failures of “quick resolution” (years) and “probable resolution” (~20%, by your numbers) are crippling for the prospect of professionals or experts investing serious resources in making profitable predictions.
Note that you can solve this by chaining markets together, i.e., having a market every year asking what the next market will predict, where the last market is 1y before AGI. This hasn’t been tried much in reality, though.
Clever, but it hasn’t been tried for a good reason. If, say, the next five years of markets are all untethered from reality (but consistent with each other), there’s no way to get paid for bringing them into line with expected reality except by putting on the trades and holding them for five years. (The natural one-year trade will just resolve to the still-unfair price of next year’s market, and there’s nothing to do about that except wait longer.)
The chained markets end up being no more fair than if they all settled to the final expiry directly.
Yes, I can imagine cases where this setup wouldn’t be enough.
Though note that you could still buy the shares in the last year. Also, if the market corrects by 10 percentage points each year (i.e., the value of a YES share increases from 10% to 20% to 30% to 40%, etc.), it might still be worth it, since each year’s market would resolve to the value of a share, not to 0 or 100 (a rough numeric sketch of this follows below).
Also note that the current way in which prediction markets are structured is, as you point out, dumb: you bet 5 depreciating dollars which then go into escrow, rather than $5 worth of, say, S&P 500 shares, which increase in value. But this could change.
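To make the arithmetic in the two points above concrete, here is a minimal sketch; the price path (10% → 20% → 30% → 40%), the settle-to-next-year's-price rule, and the 7%/year index return are all illustrative assumptions, not claims about any real market:

```python
# Illustrative arithmetic for the chained-market idea above.
# Assumed yearly settlement prices for a YES share: 10% -> 20% -> 30% -> 40%.
# Each year's market is assumed to settle to the next year's price, not to 0 or 100.
prices = [0.10, 0.20, 0.30, 0.40]

for year, (buy, settle) in enumerate(zip(prices, prices[1:]), start=1):
    print(f"Year {year}: buy at {buy:.2f}, settle at {settle:.2f}, "
          f"return {settle / buy - 1:+.0%}")

# The escrow point: $5 of cash locked up for 3 years stays $5, while $5 of an
# index fund at an assumed 7%/year would grow, shrinking the cost of the lock-up.
stake, years, index_return = 5.00, 3, 0.07
print(f"${stake:.2f} held as cash after {years} years: ${stake:.2f}")
print(f"${stake:.2f} held in an index at {index_return:.0%}/yr: "
      f"${stake * (1 + index_return) ** years:.2f}")
```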
I’d agree this would work poorly in traditional Prediction Markets. Not so sure about Prediction Tournaments, or other Prediction Market systems that could exist. These could be heavily subsidized, and the money on hold could be invested in more standard asset classes.
(Note: I said >20%, not exactly 20%.)
I understand Zvi’s points as being relatively universal to systems where you want to use rewards to incentivize participants to work hard to get good answers.
No matter how the payouts work, a p% chance that your questions don’t resolve is (to first order) equivalent to a p% tax on investment in making better predictions, and a years-long tie-up kills iterative growth and selection/amplification cycles, as well as limiting the return on investment in general prediction skill to a one-shot game. I don’t think these issues go away if you reward predictions differently, since they’re general features of the relation between the up-front investment in making better predictions and the eventual reward for doing so well.
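As a toy illustration of the “p% tax” point (the edge, stake, research cost, and resolution probability below are made-up numbers, not anyone’s estimates):

```python
# Toy model of the "p% no-resolution tax" point. Every number here is a made-up
# illustration, not anyone's actual estimate.
def expected_profit(edge: float, stake: float, research_cost: float,
                    p_resolve: float) -> float:
    """Expected profit from spending `research_cost` on research that earns an
    `edge` on `stake` if the question resolves (probability `p_resolve`);
    the research spend is lost either way."""
    return p_resolve * edge * stake - research_cost

# Worth doing if resolution were certain...
print(expected_profit(edge=0.05, stake=10_000, research_cost=300, p_resolve=1.0))  # 200.0
# ...but a loser if the question only resolves 20% of the time.
print(expected_profit(edge=0.05, stake=10_000, research_cost=300, p_resolve=0.2))  # -200.0
```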
(A counterpoint I’ll entertain is Zvi’s caveat to “quick resolution”—which also caveats “probable resolution”—that sufficient liquidity can substitute for resolution. But bootstrapping that liquidity itself seems like a Hard Problem, so I’d need to further be convinced that it’s tractable here.)
If the reason your questions won’t resolve is that you are dead or that none of your money at all will be useful, I think things are a bit different.
That said, one major ask is that the forecasters believe the AGI will happen in between, which seems to me like an even bigger issue :)
I’d estimate there’s a 2% chance of this being considered “useful” in 10 years, and in those cases would estimate it to be worth $10k to $20 million (90% CI). Would you predict <0.1%?
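For what it’s worth, one way to turn that estimate into an expected value; fitting a lognormal to the stated 90% CI is an interpretive assumption on my part:

```python
import math

# Reading the estimate above as an expected value. Fitting a lognormal to the
# stated 90% CI is an interpretive assumption, not something the comment specifies.
p_useful = 0.02
low, high = 1e4, 2e7      # $10k to $20M, stated as a 90% CI conditional on "useful"
z90 = 1.645               # a central 90% interval spans about +/-1.645 sigma

mu = (math.log(low) + math.log(high)) / 2
sigma = (math.log(high) - math.log(low)) / (2 * z90)
mean_if_useful = math.exp(mu + sigma**2 / 2)

print(f"mean value if 'useful':  ${mean_if_useful:,.0f}")             # roughly $6.5M
print(f"unconditional expected:  ${p_useful * mean_if_useful:,.0f}")   # roughly $130k
```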
I’m still thinking about what quantitative estimates I’d stand behind. I think I’d believe that a prize-based competitive prediction system, with all evaluation deferred until and conditioned on AGI, has a <4% chance of adding more than $1M of value over [just paying some smart participants for their best-efforts opinions].
(If I thought harder about corner-cases, I think I could come up with a stronger statement.)
If the reason your questions won’t resolve is that you are dead or that none of your money at all will be useful, I think things are a bit different.
I’m confused; to restate the above, I think that a p% chance that your predictions don’t matter (for any reason: game rained out, you’re dead, your money isn’t useful) is (to first order) equivalent to a p% tax on investment in making better predictions. What do you think is different?
one major ask is that the forecasters believe the AGI will happen in between, which seems to me like an even bigger issue
Sure, that’s an issue, but I think that requiring participants to all assume short AGI timelines is tractable in a way that the delayed/improbable resolution issues are not.
I can imagine that a market without resolution issues, in which participants all believe in short AGI timelines, could support 12 semi-professional traders subsidized by interested stakeholders. I don’t believe that a market with resolution issues as above can elicit serious investment in getting its answers right from half that many. (I recognize that I’m eliding my definition of “serious investment” here.)
For the first question, I’m happy we identified this as an issue. I think it is quite different. If you think there’s a good chance you will die soon, then your marginal money will likely not be that valuable to you. It’s a lot more valuable in the case that you survive.
For example, say you found out tomorrow that there’s a 50% chance everyone will die in one week. (Gosh, this is a downer example.) You also get to make a $50 investment that will pay out $70 in two weeks. Is the expected value of the bet really equivalent to (70/2) - 50 = -$15? If you don’t expect to spend all of your money in one week, I think it’s still a good deal.
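Spelling out the arithmetic in this example (treating money as worthless in the extinction branch, which is exactly the point being argued):

```python
# Worked version of the example above. Payoffs are from the comment; treating
# money as worthless in the extinction branch is the point being argued.
p_survive = 0.5
cost, payout = 50, 70

# Naive expected dollars, valuing both branches equally:
print(f"expected dollars: {p_survive * payout - cost:+.0f}")       # -15

# Value if money only matters in worlds where you're around to spend it:
print(f"profit conditional on survival: {payout - cost:+.0f}")     # +20
```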
I’d note that Superforecasters have performed better than Prediction Markets, in what I believe are relatively small groups (<20 people). While I think that Prediction Markets could theoretically work, I’m much more confident in systems like those of Superforecasters, where they wouldn’t have to make explicit bets. That said, you could argue that their time is the cost, so the percentage chance still matters. (Of course, the alternative, of giving them money to enjoy for 5-15 years before 50% death, also seems pretty bad)