Co-founder of AI-Plans and volunteer with PauseAI.
The risk of human extinction from artificial intelligence is a near-term threat. Time is short, p(doom) is high, and anyone can take simple, practical actions right now to help prevent the worst outcomes.
The reasoning you gave sounds sensible, but it doesn’t comport with my observations. Only questions with a small number of predictors (e.g. n<10) appear to have significant problems with misaligned incentives, and even then, those issues arise only a small minority of the time.
I believe that is because Metaculus’s culture of predicting one’s true beliefs tends to override any other incentives, at least among forecasters interested enough in the topic to have an opinion in the first place.
Time can be a factor, but not as much for long-shot conditionals or long-time-horizon questions. The time investment to predict on a question you don’t expect to update regularly can be on the order of one minute.
Some forecasters aim to maximize baseline score, and some aim to maximize peer score. That influences each forecaster’s decision about whether to predict, but it doesn’t seem to have a significant impact on the aggregate. Maximizing peer score incentivizes forecasters to stay away from questions where they strongly agree with the community (a choice that doesn’t affect the community prediction in those cases). Maximizing baseline score incentivizes forecasters to stay away from questions on which they would predict with high uncertainty, which slightly selects for people who at least believe they have some insight.
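To make the incentive argument concrete, here is a minimal sketch of the two scores, assuming the log-based definitions Metaculus uses for binary questions (baseline score compares your forecast against a uniform 50% prior; peer score is your average log-score advantage over other forecasters; both scaled by 100). The function names and the community probabilities are illustrative, not taken from any real question.

```python
import math

def baseline_score(p, resolved_yes):
    """Log score relative to a uniform 50% baseline, scaled by 100
    (assumed form of Metaculus's binary baseline score)."""
    p_outcome = p if resolved_yes else 1 - p
    return 100 * (math.log(p_outcome) - math.log(0.5)) / math.log(2)

def peer_score(p, others, resolved_yes):
    """Average log-score difference versus the other forecasters, scaled
    by 100 (assumed form of Metaculus's peer score)."""
    def log_score(q):
        q_outcome = q if resolved_yes else 1 - q
        return math.log(q_outcome)
    return 100 * sum(log_score(p) - log_score(q) for q in others) / len(others)

# A maximally uncertain forecast earns a baseline score of 0 either way,
# so baseline-maximizers skip questions where they have no insight:
print(baseline_score(0.5, True))               # 0.0

# Agreeing exactly with the community earns a peer score of 0,
# so peer-maximizers skip questions where they agree with the crowd:
print(peer_score(0.9, [0.9, 0.9, 0.9], True))  # 0.0

# Disagreeing with the crowd and being right is rewarded:
print(peer_score(0.9, [0.6, 0.7, 0.5], True) > 0)  # True
```

Under these definitions, both sit-out choices are exactly the ones described above: neither abstention distorts the aggregate, since a forecast identical to the community prediction adds nothing and a 50% forecast carries no information.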
Questions that would resolve in 100 years or only if something crazy happens have essentially no relationship with scoring, so with no external incentives in any direction, people do what they want on those questions, which is almost always to predict their true beliefs.