There’s a close analogue, which is getting accepted as a superforecaster by the Good Judgement Project by performing in the top 1%, I believe, on Good Judgement Open. (They have badges of some sort for superforecasters as well.) I’ll also note that the top-X Metaculus score is a weird metric, and not a great one to try to get people to maximize, because it rewards participation as well as accuracy. For example, you can rack up tons of points just by always guessing the Metaculus average and updating frequently, though you’ll never overtake the top people that way. And contra ike, as a rank 50-100 “metaculuser” who doesn’t have time to predict on everything and push my score higher, I think we should privilege that distinction over all the people who rank higher than me on Metaculus. ;)
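(For the curious, here’s a toy sketch of the “participation rewards points” effect. This is not Metaculus’s actual points formula; the `toy_points` rule below is just a made-up log score relative to a 50% baseline, and the numbers are illustrative. The point is only that a copycat who always submits a reasonably calibrated community median earns positive points on average, so their total climbs with the number of questions they touch.)

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_points(forecast, outcome, baseline=0.5):
    """Toy points rule (NOT the real Metaculus formula): log score relative
    to a maximally uncertain 50% baseline. Positive when you put more than
    50% on what actually happens, negative otherwise."""
    p = forecast if outcome == 1 else 1 - forecast
    q = baseline if outcome == 1 else 1 - baseline
    return 100 * (np.log(p) - np.log(q))

# Simulate a "copycat" who always submits the community median, which we
# assume is well calibrated (the question resolves with that probability).
total = 0.0
for _ in range(200):                        # predict on 200 questions
    community = rng.uniform(0.05, 0.95)     # community median on this question
    outcome = int(rng.random() < community) # resolution drawn at that rate
    total += toy_points(community, outcome)

print(f"copycat's cumulative points: {total:.0f}")  # grows with participation
```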
I will say that I think there’s already a reasonable amount of prestige in certain circles for being a superforecaster, especially in EA- and LW-adjacent areas, though it’s hard for me to disentangle how much prestige is from that versus other things I have been doing around the same time, like getting a PhD.
Yes, you should definitely milk your PhD for as much status as possible, Dr. Manheim.
Having been a rank 50-100 “metaculuser” myself before, I completely agree (I’m currently at rank 112).
Good to see so many of us moderately good forecasters agreeing; now we just need to average and then extremize the forecast of how good an idea this is. ;)
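(In case the joke is unfamiliar: “average then extremize” is a real aggregation trick from the Good Judgement Project literature: pool the individual probabilities, then push the pooled probability away from 50% to compensate for each forecaster’s private uncertainty. A minimal sketch of one common variant, pooling in log-odds space; the helper name `extremized_mean` and the exponent `a = 2.5` are just illustrative choices, not a recommendation.)

```python
import numpy as np

def extremized_mean(probs, a=2.5):
    """Average forecasts in log-odds space, then extremize by exponent `a`."""
    probs = np.clip(np.asarray(probs, dtype=float), 1e-6, 1 - 1e-6)
    log_odds = np.log(probs / (1 - probs))
    pooled = log_odds.mean() * a          # a > 1 pushes the pool away from 50%
    return 1 / (1 + np.exp(-pooled))

# Three moderately confident forecasters:
print(extremized_mean([0.65, 0.70, 0.75]))  # ~0.90, well above the plain mean of 0.70
```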
I don’t know what performance measure is used to select superforecasters, but updating frequently seems to usually improve your accuracy score on GJ Open as well (see “Activity Loading” in this thread on the EA forum).
Yes, it’s super important to update frequently when the scores are computed as time-weighted. And for Metaculus that’s a useful thing, since viewers want to know what the current best guess is, but it’s not the only way to do scoring. But saying frequent updating makes you better at forecasting isn’t really a fact about how accurate the individual forecasts are; it’s a fact about how they are scored.
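(A toy sketch of what I mean by that, assuming a simple time-averaged Brier score where each stated probability is carried forward until replaced; this isn’t the actual Metaculus or GJ Open rule, and `time_weighted_brier` is just an illustrative helper. Two forecasters with identical judgment at the moments they forecast end up with very different scores, purely because one of them re-submits more often.)

```python
import numpy as np

def time_weighted_brier(times, probs, horizon, outcome):
    """Brier score averaged over the question's lifetime.

    `times`   -- days on which a probability was (re)stated
    `probs`   -- the probability stated on each of those days
    `horizon` -- total days the question is open
    Each forecast is carried forward until the next update, so stale
    forecasts keep accruing (possibly bad) score.
    """
    daily = np.empty(horizon)
    for day in range(horizon):
        # use the most recent forecast made on or before this day
        idx = np.searchsorted(times, day, side="right") - 1
        p = probs[max(idx, 0)]
        daily[day] = (p - outcome) ** 2
    return daily.mean()

# Toy question: the sensible forecast drifts from 0.3 to 0.9 over 100 days,
# and the question resolves "yes" (outcome = 1).
horizon, outcome = 100, 1
drift = np.linspace(0.3, 0.9, horizon)

# A frequent updater tracks the drift; an infrequent updater states the same
# (locally sensible) probabilities, but only on days 0 and 50.
frequent = time_weighted_brier(np.arange(horizon), drift, horizon, outcome)
infrequent = time_weighted_brier(np.array([0, 50]), drift[[0, 50]], horizon, outcome)

print(f"frequent updater:   {frequent:.3f}")    # lower (better) time-averaged Brier
print(f"infrequent updater: {infrequent:.3f}")  # worse, despite identical judgment
```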