Does anybody know of research studying whether prediction markets/forecasting averages become more accurate if you exclude non-superforecasters' predictions rather than including them?
To be specific, say you run a forecasting tournament with 1,000 participants. After determining the Brier score of each participant, you compute what the Brier score would be for the average of the best 20 participants vs. the average of all 1,000 participants. Which average would typically have a lower Brier score—the average of the best 20 participants' predictions, or the average of all 1,000 participants' predictions?
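To make the comparison concrete, here's a rough sketch (Python/NumPy, with entirely made-up simulated data and an arbitrary skill model) of the computation I have in mind. It's only meant to illustrate the comparison, not to suggest which way the result goes; also note that in this toy version the top 20 are selected on the same questions they're scored on, whereas in a real tournament you'd identify superforecasters on earlier questions and score on held-out ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n_forecasters, n_questions = 1000, 200

# Simulated ground truth: each question resolves to 0 or 1.
outcomes = rng.integers(0, 2, size=n_questions)

# Simulated forecasts: each forecaster reports a probability per question,
# with forecaster-specific noise so some are more skilled than others.
skill = rng.uniform(0.05, 0.45, size=n_forecasters)            # per-forecaster noise level
noise = rng.normal(0, skill[:, None], size=(n_forecasters, n_questions))
forecasts = np.clip(outcomes + noise, 0.01, 0.99)              # shape (1000, 200)

# Individual Brier scores: mean squared error of probability vs. outcome.
individual_brier = ((forecasts - outcomes) ** 2).mean(axis=1)

def brier_of_average(idx):
    """Brier score of the averaged forecast over a subset of forecasters."""
    avg_forecast = forecasts[idx].mean(axis=0)
    return ((avg_forecast - outcomes) ** 2).mean()

top20 = np.argsort(individual_brier)[:20]      # 20 best individual Brier scores
everyone = np.arange(n_forecasters)

print("Brier of top-20 average:   ", brier_of_average(top20))
print("Brier of all-1,000 average:", brier_of_average(everyone))
```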