Good points! This covers a lot of ground that we’ve been thinking about.
So the thing I’m wondering here is what makes this “amplification” in more than a trivial sense.
To be honest, I’m really not sure what word is best here. “Amplification” is the word we used for this post. I’ve also thought about calling this sort of thing “Proliferation” after “Instillation” here, and have previously referred to this method as Prediction-Augmented Evaluation Systems. I agree that the employee case could also be considered a kind of amplification under this terminology. If you have preferences or other ideas for names for this, I’d be eager to hear them!
but has the disadvantage that you can’t directly give rewards for other criteria like “how well is this explained”. You also can’t reward research on topics that you don’t do deep dives on.
Very true, at least at this stage of Foretold’s development. I’ve written up some more thinking on this here. Traditional prediction markets don’t do a good job of incentivizing participants to share descriptions and research, but ideally future systems would. We’re working on ways to improve this in Foretold. A very simple setup would be one that gives people points/money for writing comments that are upvoted by important predictors.
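To make that concrete, here’s a rough sketch in Python of the kind of rule I have in mind (the names like comment_rewards and predictor_weights, and the exact weighting, are made up for illustration and aren’t anything Foretold currently implements): each comment’s share of a fixed reward pool is proportional to its upvotes, with each upvote weighted by how much weight we give the upvoting predictor.

```python
def comment_rewards(comments, predictor_weights, pool=100.0):
    """Split a fixed reward pool across comments in proportion to their
    upvotes, weighting each upvote by the upvoter's predictor weight."""
    scores = {}
    for comment_id, upvoters in comments.items():
        scores[comment_id] = sum(predictor_weights.get(u, 0.0) for u in upvoters)
    total = sum(scores.values())
    if total == 0:
        return {c: 0.0 for c in scores}
    return {c: pool * s / total for c, s in scores.items()}

# Hypothetical predictor weights, e.g. derived from past forecasting accuracy.
weights = {"alice": 2.0, "bob": 1.0, "carol": 0.5}
# Which predictors upvoted which comment.
comments = {"comment_1": ["alice", "bob"], "comment_2": ["carol"]}
print(comment_rewards(comments, weights))
# {'comment_1': 85.71..., 'comment_2': 14.28...}
```

One nice property of a rule like this is that the important predictors never have to allocate money directly; their ordinary upvotes do that work for them.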
I think it’s worth comparing this more explicitly to the most straightforward alternative, which is “ask people to send you information and probability distributions, then use your intuition or expertise or whatever other criteria you like to calculate how valuable their submission is, then send them a proportional amount of money.”
This isn’t incredibly far from what we’re going for, but I think the added presence of a visible aggregate, and the ability for forecasters to learn from and compete with each other, will be useful in expectation. I would also want this to be a very systematized process, because a systematized process opens up a lot of room for optimization. The big downside of forecasting systems is that they are less flexible than free-form solutions, but one big upside is that it may be possible to optimize them in ways free-form setups can’t be. For instance, eventually there could be significant data science pipelines, and lots of statistics for accuracy and calibration, that would be difficult to attain in free-form setups. I think in the short term online forecasting setups will be relatively expensive, but it’s possible that with some work they could become significantly more effective for certain types of problems.
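As a toy illustration of the kind of accuracy and calibration statistics I mean, here’s a short Python sketch of two standard measures (a Brier score and a bucketed calibration table) that a systematized pipeline could compute automatically from forecast histories. This isn’t how Foretold actually computes anything today; it’s just meant to show what becomes cheap once forecasts are structured data rather than free-form text.

```python
from collections import defaultdict

def brier_score(forecasts):
    """Mean squared error between stated probabilities and 0/1 outcomes (lower is better)."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

def calibration_table(forecasts, n_bins=5):
    """Bucket forecasts by stated probability and compare each bucket's mean
    probability with the observed frequency of the event."""
    buckets = defaultdict(list)
    for p, outcome in forecasts:
        buckets[min(int(p * n_bins), n_bins - 1)].append((p, outcome))
    table = {}
    for b, items in sorted(buckets.items()):
        mean_p = sum(p for p, _ in items) / len(items)
        observed = sum(outcome for _, outcome in items) / len(items)
        table[b] = {"mean_forecast": round(mean_p, 2),
                    "observed_freq": round(observed, 2),
                    "n": len(items)}
    return table

# Each entry: (stated probability, realized outcome as 0 or 1).
history = [(0.9, 1), (0.8, 1), (0.7, 0), (0.6, 1), (0.3, 0), (0.2, 0)]
print(brier_score(history))        # ≈ 0.14
print(calibration_table(history))  # per-bucket calibration summary
```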
I’d definitely agree that good crowdsourced forecasting questions need to be in some sort of sweet spot of “difficult enough to make external-forecasting useful, but open/transparent enough to make external-forecasting possible.”