“Superforecasters learning to choose easier questions”
Just wanted to note that it’s not easier questions, per se; it’s ones where you have a marginal advantage due to an information or skill asymmetry. And because it’s a competition, at least sometimes, you also have an incentive to predict on questions that are being ignored. There are definitely fewer people forecasting the more intrinsically uncertain questions, but since participants are scored with the superforecaster median for questions they don’t answer, that’s a resource-allocation question rather than the system interfering with the real world. We see this happening broadly when prediction scoring systems don’t match incentives, but I’ve discussed that elsewhere, and there was a recent LW post on the point as well.
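To make the scoring point concrete, here’s a minimal sketch of median imputation for skipped questions; the question IDs, numbers, and helper names are invented for illustration, and this isn’t GJP’s actual scoring code:

```python
from statistics import median

def brier(prob: float, outcome: int) -> float:
    """Brier score for a binary question (lower is better)."""
    return (prob - outcome) ** 2

def mean_brier_with_imputation(own, group, outcomes):
    """Mean Brier score, filling in the group's median forecast
    for any question the forecaster skipped."""
    scores = []
    for q, outcome in outcomes.items():
        p = own.get(q, median(group[q]))  # skipped question -> take the median forecast
        scores.append(brier(p, outcome))
    return sum(scores) / len(scores)

# Illustrative numbers: answer only the question where you think you beat the median.
group = {"q1": [0.60, 0.65, 0.70], "q2": [0.45, 0.50, 0.55]}
outcomes = {"q1": 1, "q2": 0}
print(mean_brier_with_imputation({"q1": 0.90}, group, outcomes))  # answers q1 only
print(mean_brier_with_imputation({}, group, outcomes))            # pure median baseline
```

Skipping q2 costs nothing relative to the median, so the only reason to answer a question is if you expect to beat the median on it, which is the resource-allocation point above.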
Mostly, this type of interference runs from real-world goals to predictions, rather than the reverse. We do see some interference in prediction markets intended to change real-world outcomes in the first half of the 20th century: “The newspapers periodically contained charges that the partisans were manipulating the reported betting odds to create a bandwagon effect.” (Rhode and Strumpf, 2003)
Thanks. I keep missing this one, because Good Judgment Open, the platform used to select forecasters, rewards both Brier score and relative Brier score.
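For what it’s worth, my rough understanding of the distinction (the exact formula GJO uses may differ) is that the relative score is just your Brier score minus the median forecaster’s on the questions you actually answered, something like:

```python
from statistics import median

def brier(prob: float, outcome: int) -> float:
    return (prob - outcome) ** 2

def relative_brier(own, group, outcomes):
    """Own Brier minus the median forecaster's Brier, averaged over the
    questions actually answered (negative means better than the median)."""
    diffs = [
        brier(own[q], outcomes[q]) - brier(median(group[q]), outcomes[q])
        for q in own
    ]
    return sum(diffs) / len(diffs) if diffs else 0.0
```

Roughly, the absolute score rewards answering questions you can get nearly certain on, while the relative score only rewards beating the other forecasters, which is part of why the mix matters for which questions people choose to answer.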
Yes—GJO isn’t actually quite doing superforecasting as the book describes—for example, it’s not team-based.
I read that line differently, though I agree with your remarks. “Superforecasters learning to choose easier questions” was, to me, at least as much about the suite of questions posed to the forecasters as about the questions each individual forecaster chooses to answer. If a forecasting firm wants to build a reputation, it could learn to ask questions that look harder to answer than they really are.
That’s a good point. For some of the questions, that’s a reasonable criticism, but as GJ Inc. becomes increasingly based on client-driven questions, it’s a less viable strategy.