This would very much confuse things. Predictions resolve based on observed, measurable events; models never do. You now have conflicting motives: you want to bet on things that move the market toward your prediction, but you also want to trick others into adopting models that give you betting opportunities.
It wouldn’t work in prediction markets (though this is muddied by the fact that people often use “prediction market” to refer to other mechanisms), but I’ve played around with the idea for prediction polls/prediction tournaments: show people’s explanations probabilistically weighted by their “explanation score”, then pay out points based on how correlated seeing their explanation is with other people making good predictions.
This provides a counter-incentive to the normal prediction tournament incentives of hiding information.
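The mechanism above can be sketched in code. This is a minimal illustration, not a real tournament implementation: the data structures, the Brier-score comparison, and all names (`pick_explanation`, `update_score`, the learning rate) are hypothetical choices standing in for "weighted display" and "pay out by correlation with good predictions".

```python
import random

def pick_explanation(explanations):
    """Sample one explanation to show, weighted by its current score.

    `explanations` is a hypothetical list of (text, score) pairs;
    higher scores make an explanation more likely to be displayed.
    """
    total = sum(score for _, score in explanations)
    r = random.uniform(0, total)
    for text, score in explanations:
        r -= score
        if r <= 0:
            return text
    return explanations[-1][0]

def update_score(old_score, viewer_briers, baseline_brier, lr=0.1):
    """Nudge an explanation's score by how much better forecasters who
    saw it did than the tournament baseline (lower Brier is better).

    `viewer_briers`: Brier scores of predictions made after viewing the
    explanation; `baseline_brier`: mean Brier score of everyone else.
    A positive edge rewards informative explanations, giving the
    counter-incentive to information hiding described above.
    """
    if not viewer_briers:
        return old_score
    edge = baseline_brier - sum(viewer_briers) / len(viewer_briers)
    return max(0.01, old_score + lr * edge)  # floor keeps display odds nonzero
```

For example, an explanation whose viewers average a Brier score of 0.15 against a baseline of 0.30 has its score raised, so it is shown more often in future rounds; one whose viewers do worse than baseline decays toward the floor.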
These concerns seem slightly overblown given that the comment sections on Metaculus seem reasonable, with people sharing info?
This is basically guaranteed to get worse as more money gets involved, and I’m interested in it working in situations where lots of money is at stake.
Fair