I understand Zvi’s points as being relatively universal to systems where you want to use rewards to incentivize participants to work hard to get good answers.
No matter how the payouts work, a p% chance that your questions don’t resolve is (to first order) equivalent to a p% tax on investment in making better predictions, and a years-long tie-up kills iterative growth and selection/amplification cycles, as well as limiting the return on investment in general prediction skill to a one-shot game. I don’t think these issues go away if you reward predictions differently, since they’re general features of the relation between the up-front investment in making better predictions and the eventual reward for predicting well.
(A counterpoint I’ll entertain is Zvi’s caveat to “quick resolution”—which also caveats “probable resolution”—that sufficient liquidity can substitute for resolution. But bootstrapping that liquidity itself seems like a Hard Problem, so I’d need to further be convinced that it’s tractable here.)
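The first-order equivalence claim can be sketched numerically. This is an illustrative toy model (all numbers are my assumptions, not from the discussion): a risk-neutral forecaster pays an effort cost up front and collects a reward only if the question resolves.

```python
# Toy model: a p% chance of non-resolution is, to first order,
# equivalent to a p% tax on the reward for prediction effort.
# All numbers below are illustrative assumptions.

def expected_profit(effort_cost, gross_reward, p_no_resolve):
    """Risk-neutral EV: the reward only arrives if the question resolves."""
    return (1 - p_no_resolve) * gross_reward - effort_cost

def expected_profit_taxed(effort_cost, gross_reward, tax_rate):
    """Same EV with a tax on the reward instead of a resolution risk."""
    return (1 - tax_rate) * gross_reward - effort_cost

p = 0.25  # hypothetical 25% chance the question never resolves
print(expected_profit(100, 150, p))        # 12.5
print(expected_profit_taxed(100, 150, p))  # 12.5 -- identical to first order
```

The two are identical for a risk-neutral participant; they come apart at higher order (risk aversion, or correlations between resolution and the value of money, which is the point raised below).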
If the reason your questions won’t resolve is that you are dead or that none of your money at all will be useful, I think things are a bit different.
That said, one major ask is that the forecasters believe AGI will happen in the interim, which seems to me like an even bigger issue :)
I’d estimate there’s a 2% chance of this being considered “useful” in 10 years, and in those cases would estimate it to be worth $10k to $20 million (90% CI). Would you predict <0.1%?
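For what it’s worth, here is one way to turn that estimate into an unconditional expected value. The lognormal shape is my assumption (the comment only gives a 90% CI), so treat this as a back-of-envelope sketch, not the commenter’s own calculation.

```python
import math

# Back-of-envelope EV for "2% chance of being useful, worth $10k-$20M
# (90% CI) in those cases", assuming (my assumption) that the
# conditional value is lognormally distributed.
p_useful = 0.02
lo, hi = 1e4, 2e7          # 90% CI bounds on the conditional value
z90 = 1.6449               # z-score for a two-sided 90% interval

mu = (math.log(lo) + math.log(hi)) / 2
sigma = (math.log(hi) - math.log(lo)) / (2 * z90)
conditional_mean = math.exp(mu + sigma**2 / 2)  # lognormal mean

print(f"conditional mean value: ${conditional_mean:,.0f}")
print(f"unconditional EV: ${p_useful * conditional_mean:,.0f}")
```

Under these assumptions the conditional mean lands around $6–7M (the lognormal mean sits well above the geometric midpoint of the CI), for an unconditional EV on the order of $130k.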
I’m still thinking about what quantitative estimates I’d stand behind. I think I’d believe that a prize-based competitive prediction system, with all evaluation deferred until and conditioned on AGI, has a <4% chance of adding more than $1M of value over [just pay some smart participants for their best-efforts opinions].
(If I thought harder about corner-cases, I think I could come up with a stronger statement.)
“If the reason your questions won’t resolve is that you are dead or that none of your money at all will be useful, I think things are a bit different.”
I’m confused; to restate the above, I think that a p% chance that your predictions don’t matter (for any reason: game rained out, you’re dead, your money isn’t useful) is (to first order) equivalent to a p% tax on investment in making better predictions. What do you think is different?
“one major ask is that the forecasters believe AGI will happen in the interim, which seems to me like an even bigger issue”
Sure, that’s an issue, but I think that requiring participants to all assume short AGI timelines is tractable in a way that the delayed/improbable resolution issues are not.
I can imagine that a market without resolution issues, whose participants all believe in short AGI timelines, could support 12 semi-professional traders subsidized by interested stakeholders. I don’t believe that a market with resolution issues as above can elicit serious investment in getting its answers right from half that many. (I recognize that I’m eliding my definition of “serious investment” here.)
For the first question, I’m happy we identified this as an issue. I think it is quite different. If you think there’s a good chance you will die soon, then your marginal money will likely not be that valuable to you. It’s a lot more valuable in the case that you survive.
For example, say you found out tomorrow that there’s a 50% chance everyone will die in one week. (Gosh, this is a downer example.) You also get to make a $50 investment that will pay out $70 in two weeks. Is the expected value of the bet really (70/2) - 50 = -$15? If you don’t expect to spend all of your money in one week, I think it’s still a good deal.
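The arithmetic behind this example, with the extra assumption that money is worthless in the worlds where everyone dies, so the bet should be evaluated only in the surviving worlds:

```python
# The $50 bet that pays $70 under 50% survival: naive risk-neutral EV
# vs. EV conditional on survival (assumption: money has zero value
# to you in the worlds where everyone dies).
stake, payout, p_survive = 50, 70, 0.5

ev_risk_neutral = p_survive * payout - stake  # treats dead-world dollars as real losses
ev_if_survive = payout - stake                # return in the worlds where money matters

print(ev_risk_neutral)  # -15.0
print(ev_if_survive)    # 20
```

The naive EV is negative, but conditional on survival the bet returns +$20 on $50, which is the sense in which it’s “still a good deal.”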
I’d note that superforecasters have outperformed prediction markets, in what I believe are relatively small groups (<20 people). While I think that prediction markets could theoretically work, I’m much more confident in superforecaster-style systems, where participants wouldn’t have to make explicit bets. That said, you could argue that their time is the cost, so the percentage chance still matters. (Of course, the alternative, of giving them money to enjoy for 5–15 years before a 50% chance of death, also seems pretty bad.)