Metaculus does not have this problem, since it is not a market and there is no cost to make a prediction. I expect long-shot conditionals on Metaculus to be more meaningful, then, since everyone is incentivized to predict their true beliefs.
The cost to make a prediction is time. The incentive of making it look like “Metaculus thinks X” is still present. The incentive to predict correctly is attenuated to the extent that it’s a long-shot conditional or a far future prediction. So Metaculus can still have the same class of problem.
The reasoning you gave sounds sensible, but it doesn’t comport with observations. Only questions with a small number of predictors (e.g. n<10) appear to have significant problems with misaligned incentives, and even then, those issues come up a small minority of the time.
I believe that is because the culture on Metaculus of predicting one’s true beliefs tends to override any other incentives downstream of being interested enough in the concept to have an opinion.
Time can be a factor, but not as much for long-shot conditionals or long time horizon questions. The time investment to predict on a question you don’t expect to update regularly can be on the order of 1 minute.
Some forecasters aim to maximize baseline score, and some aim to maximize peer score. That influences each forecaster’s decision to predict or not, but it doesn’t seem to have a significant impact on the aggregate.
Maximizing peer score incentivizes forecasters to stay away from questions where they are strongly in agreement with the community. (That choice doesn’t affect the community prediction in those cases.) Maximizing baseline score incentivizes forecasters to stay away from questions on which they would predict with high uncertainty, which slightly selects for people who at least believe they have some insight.
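To make those incentives concrete, here is a minimal sketch of the two scores for a binary question, using simplified versions of the Metaculus formulas (baseline score rewards your forecast relative to a uniform 50% prior; peer score is your log score minus the mean log score of the other forecasters, both scaled by 100). The exact constants and aggregation details on the real site may differ; this is illustrative only.

```python
import math

def baseline_score(p, resolved_yes=True):
    # Simplified baseline score for a binary question:
    # reward relative to a uniform 50% prior, scaled by 100.
    q = p if resolved_yes else 1 - p
    return 100 * math.log2(q / 0.5)

def peer_score(p, others, resolved_yes=True):
    # Simplified peer score: your log score minus the mean
    # log score of the other forecasters, scaled by 100.
    q = p if resolved_yes else 1 - p
    qs = [o if resolved_yes else 1 - o for o in others]
    return 100 * (math.log(q) - sum(math.log(x) for x in qs) / len(qs))

# Agreeing exactly with the community yields a peer score of zero,
# so a peer-score maximizer gains nothing by predicting there:
print(peer_score(0.8, [0.8, 0.8, 0.8]))  # 0.0

# Predicting 50% yields a baseline score of zero either way the
# question resolves, so a baseline maximizer avoids such questions:
print(baseline_score(0.5))  # 0.0
```

The two zero cases above are the mechanism behind both avoidance patterns: agreement with the crowd zeroes out peer score, and maximal uncertainty zeroes out baseline score.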
Questions that would resolve in 100 years or only if something crazy happens have essentially no relationship with scoring, so with no external incentives in any direction, people do what they want on those questions, which is almost always to predict their true beliefs.
I don’t think we disagree on culture. I was specifically disagreeing with the claim that Metaculus doesn’t have this problem “because it is not a market and there is no cost to make a prediction”. Your point that culture can override or complement incentives is well made.