I actually didn’t realize that is how karma works here. Thanks. I think the critique would be that, as I understand it, we would be rewarding popularity rather than accuracy with this model.
Well, the key challenge with forecasting is that you have to choose questions that can be empirically verified.
Most of the posts here aren’t forecasts, though some are, and those often state their predictions explicitly.
When they don’t contain forecasts, sometimes they imply them.
I think a “strong” post implies many plausible forecasts, or gives a reasonable suggestion as to how to make them.
For example, Eliezer has a post, “Making Beliefs Pay Rent (in Anticipated Experiences),” that advises that beliefs should pay rent in terms of concrete predictions. The post itself doesn’t obviously imply an easily verifiable prediction, beyond the observation that readers may be sympathetic to the idea and find it useful.
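To make the popularity-versus-accuracy contrast concrete, here is a minimal sketch of the two scoring rules being compared. All function names and numbers are hypothetical, made up for illustration; the accuracy measure shown is the standard Brier score, just one of several proper scoring rules a forecasting-based karma system could use:

```python
# Hypothetical sketch: a popularity score (net votes) versus an
# accuracy score (Brier score) over a post's resolved forecasts.

def karma_score(votes):
    """Popularity: the net sum of upvotes (+1) and downvotes (-1)."""
    return sum(votes)

def brier_score(forecasts):
    """Accuracy: mean squared error between each stated probability
    and the resolved outcome (1 = happened, 0 = didn't).
    Lower is better; always guessing 0.5 earns 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# A post can be popular yet poorly calibrated:
votes = [1, 1, 1, 1, -1]              # net karma of 3
forecasts = [(0.9, 0), (0.8, 0)]      # two confident misses
print(karma_score(votes))             # rewards popularity
print(brier_score(forecasts))         # rewards calibration
```

The point of the sketch is only that the two functions take entirely different inputs: one needs readers' reactions, the other needs forecasts that can be empirically resolved, which is exactly the constraint discussed above.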
Yeah, and that’s not necessarily a bad thing. In theory, it rewards people who are strong content producers, rather than critics.
Undoubtedly true.
However, I suspect most here would claim they want accurate content rather than merely “strong” content.
We should be candid about what we are optimizing for either way.