I figured out what bugs me about prediction markets. I would really like for functionality built in for people to share their model considerations.
Person A says “Google’s stock is going to go down—the world is flat, and when people realize this, the Global Positioning System (GPS) will seem less valuable.”
Person B says “You’re very right, A. But given the power and influence they’ve wielded to get people to hold that belief, I don’t see the truth coming out anytime soon—and even if it did, when people look for someone to blame they won’t re-examine the beliefs, and the methods of adopting them, that got them it wrong. Instead, they will google ‘who is to blame? Who kept the truth from us about the shape of the earth?’ A scapegoat will be chosen, and how ridiculous the choice is won’t matter... because everyone trusts Google.”
Buying stocks need not stem from models you consider worth considering.
What you want should be a different layer. Perhaps a prediction market that includes ‘automatic traders’ and prediction markets* on their performance?
(* Likely the same as the original market, though perhaps with less investment.)
In any case, the market is a “black box”. It rewards being right even when your reasons for being right are wrong. Perhaps what you want is not a current (opaque) consensus about the future, but a (transparent) consensus about the past*?
*One that updates as more information becomes available might be useful.
This would very much confuse things. Predictions resolve based on observed, measurable events; models never do. You now have conflicting motives: you want to bet in ways that move the market toward your prediction, but you also want to trick others into adopting models that hand you betting opportunities.
It wouldn’t work in prediction markets proper (a point muddied by the fact that people often use the term “prediction market” for other things), but I’ve played around with the idea for prediction polls/prediction tournaments: show people’s explanations probabilistically, weighted by their “explanation score”, then pay out points based on how correlated seeing an explanation is with other people making good predictions.
This provides a counter-incentive to the normal prediction tournament incentives of hiding information.
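As a concrete illustration of the mechanism above, here is a minimal sketch, under my own assumptions: the sampling step, the `score` field, and the “mean accuracy lift among viewers” proxy for correlation are all hypothetical choices, not a description of any existing tournament’s scoring rule.

```python
import random

def pick_explanation(explanations):
    """Sample one explanation to show a forecaster,
    weighted by its current explanation score (assumed field)."""
    weights = [e["score"] for e in explanations]
    return random.choices(explanations, weights=weights, k=1)[0]

def update_scores(explanations, exposures, accuracies):
    """Reward explanations whose viewers went on to predict well.

    exposures[i]  -- id of the explanation forecaster i saw
    accuracies[i] -- that forecaster's later prediction accuracy in [0, 1]

    Uses mean accuracy lift (viewers vs. non-viewers) as a crude stand-in
    for 'how correlated seeing the explanation is with predicting well'.
    """
    for e in explanations:
        seen = [a for exp_id, a in zip(exposures, accuracies) if exp_id == e["id"]]
        unseen = [a for exp_id, a in zip(exposures, accuracies) if exp_id != e["id"]]
        if seen and unseen:
            lift = sum(seen) / len(seen) - sum(unseen) / len(unseen)
            # floor keeps every explanation some nonzero chance of being shown
            e["score"] = max(0.01, e["score"] + lift)
```

Because future display probability rises with the score, writing an explanation that actually improves other people’s forecasts pays, which is the counter-incentive to hiding information.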
These concerns seem slightly overblown given that the comment sections on Metaculus seem reasonable, with people sharing info?
This is basically guaranteed to get worse as more money gets involved, and I’m interested in it working in situations where lots of money is at stake.
Fair