Isn’t the fact that Manifold is not really a real-money prediction market very important here? If there were real money on the table, for example, it’s less likely that the 1/1/26 market would have been “forgotten”—the original traders would have had money on the line to discipline their attention.
Every time someone calls Manifold (or Metaculus) a “prediction market”, god kills an arbitrageur [even though both platforms are still great!].
Real-money markets do have stronger incentives for sharps to scour for arbitrage, so the 1/1/26 market would have been more likely to be noticed before months had gone by.
However (depending on the fee structure for resolving N/A markets), real-money markets have even stronger incentives for sharps to stay away entirely from spurious conditional markets, since they’d be throwing away cash and not just Internet points. Never ever ever cite out-of-the-money conditional markets.
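To put rough numbers on that (every figure below, including the N/A fee, is an illustrative assumption, not any platform’s actual fee schedule):

```python
# Why a sharp would avoid a spurious long-shot conditional market.
# Every number here is an illustrative assumption, not data from any platform.
p_resolves = 0.02          # chance the condition triggers (market doesn't N/A)
edge_if_resolved = 0.15    # profit per dollar staked, *given* the market resolves
stake = 200                # capital tied up in the position
na_fee_rate = 0.01         # assumed fee charged when the market resolves N/A
annual_return_elsewhere = 0.08
lockup_years = 1.0         # capital is stuck until resolution either way

expected_edge = p_resolves * edge_if_resolved * stake                # $0.60
expected_na_fee = (1 - p_resolves) * na_fee_rate * stake             # $1.96
opportunity_cost = stake * annual_return_elsewhere * lockup_years    # $16.00

print(expected_edge - expected_na_fee - opportunity_cost)  # about -$17: a clear pass
```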
Even in real-money prediction markets, “how much real money?” is a crucial question for deciding whether to trust the market. If there are a tonne of questions, no easy way to find the “forgotten” markets, and (say) only tens to hundreds of dollars of orders on each book, then the people skilled enough to do the work likely have better ways to turn their time (and capital) into money. For example, I think some of the more niche Betfair markets are probably not worth taking particularly seriously.
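A back-of-the-envelope sketch with made-up numbers:

```python
# Is correcting one thin, forgotten market worth a skilled trader's time?
# Illustrative numbers only.
book_depth = 100              # dollars resting on the book (the "tens to hundreds" range)
mispricing = 0.10             # how far the price sits from fair value
fees_and_slippage = 0.02
hours_to_find_and_research = 0.5
hourly_opportunity_cost = 50  # what the same person could earn elsewhere

expected_profit = book_depth * (mispricing - fees_and_slippage)           # $8
cost_of_attention = hours_to_find_and_research * hourly_opportunity_cost  # $25

print(expected_profit - cost_of_attention)  # negative: the market stays wrong
```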
Metaculus does not have this problem, since it is not a market and there is no cost to make a prediction. I expect long-shot conditionals on Metaculus to be more meaningful, then, since everyone is incentivized to predict their true beliefs.
The cost to make a prediction is time. The incentive of making it look like “Metaculus thinks X” is still present. The incentive to predict correctly is attenuated to the extent that it’s a long-shot conditional or a far future prediction. So Metaculus can still have the same class of problem.
The reasoning you gave sounds sensible, but it doesn’t comport with observations. Only questions with a small number of predictors (e.g. n<10) appear to have significant problems with misaligned incentives, and even then, those issues come up a small minority of the time.
I believe that is because the culture on Metaculus of predicting one’s true beliefs tends to override any other incentives downstream of being interested enough in the concept to have an opinion.
Time can be a factor, but not as much for long-shot conditionals or long time horizon questions. The time investment to predict on a question you don’t expect to update regularly can be on the order of 1 minute.
Some forecasters aim to maximize baseline score, and some aim to maximize peer score. That influences each forecaster’s decision to predict or not, but it doesn’t seem to have a significant impact on the aggregate.
Maximizing peer score incentivizes forecasters to stay away from questions where they are strongly in agreement with the community. (That choice doesn’t affect the community prediction in those cases.) Maximizing baseline score incentivizes forecasters to stay away from questions on which they would predict with high uncertainty, which slightly selects for people who at least believe they have some insight.
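A toy version of the two scoring rules makes the asymmetry concrete (these are simplified log scores in the spirit of baseline/peer scoring, not Metaculus’s exact formulas):

```python
import math

def baseline_score(p_yes, resolved_yes):
    """Simplified: reward vs. a 50/50 'no-insight' prior; predicting 0.5 scores 0."""
    p = p_yes if resolved_yes else 1 - p_yes
    return 100 * math.log2(p / 0.5)

def peer_score(p_yes, community_p_yes, resolved_yes):
    """Simplified: reward vs. the community; matching the community scores 0."""
    p = p_yes if resolved_yes else 1 - p_yes
    q = community_p_yes if resolved_yes else 1 - community_p_yes
    return 100 * math.log2(p / q)

# A peer-score maximizer who agrees with the community gains nothing either way:
print(peer_score(0.9, 0.9, True), peer_score(0.9, 0.9, False))  # 0.0 0.0

# A baseline-score maximizer with no real insight (belief ~0.5) also gains
# nothing in expectation, so they skip questions where they'd be very uncertain:
print(baseline_score(0.5, True), baseline_score(0.5, False))    # 0.0 0.0
```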
Questions that would resolve in 100 years or only if something crazy happens have essentially no relationship with scoring, so with no external incentives in any direction, people do what they want on those questions, which is almost always to predict their true beliefs.
I don’t think we disagree on culture. I was specifically disagreeing with the claim that Metaculus doesn’t have this problem “because it is not a market and there is no cost to make a prediction”. Your point that culture can override or complement incentives is well made.