I think it would be worthwhile to present an example of a possible market. Even without payment, what you are proposing seems quite complex, and thinking through a concrete example without payment would make it clearer how the thing would work.
Valid. I’m still working on properly writing up the version with the full math, which is much more complicated. Without that math and without payment, it consists of people stating their beliefs and being mysteriously believed about them, because everyone knows everyone is incentivised to be honest and sincere, and the Agreement Theorem says that means they’ll agree once they all know everyone else’s reasoning.
Here is a possible example, which I think is the minimum case for any kind of market information system like this:
weather.com wants accurate predictions, 7 days in advance, for a list of measurements that will be taken at weather stations around the world, to inform its customers.
It proposes a naive prior, something like every measurement being a random sample from the past history.
It offers to pay $1 million in reward per expected bit of information about the average sensor (the sensors being what it uses to assess the outcome), for predictions submitted before the weekly deadline. That means that if the rain-sensors are all currently estimated at a 10% chance of rain, and you move half of them to 15% and the other half to 5%, you should expect $1 million in profit for improving the rain predictions (conditional on your update actually being legitimate).
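For concreteness, one natural reading of an “expected bit of information” is the KL divergence from the naive prior to your prediction, evaluated under your own beliefs. Here is a minimal sketch under that assumption (the actual payout then depends on how the per-bit rate is aggregated across sensors):

```python
from math import log2

def expected_bits(q, p):
    """Expected information gain, in bits, of moving a binary estimate
    from the prior p to your belief q, evaluated under q itself
    (the KL divergence KL(q || p))."""
    return q * log2(q / p) + (1 - q) * log2((1 - q) / (1 - p))

# The rain example: prior of 10% everywhere, half the sensors moved to 15%,
# the other half to 5%.
prior = 0.10
avg_bits = (expected_bits(0.15, prior) + expected_bits(0.05, prior)) / 2
print(avg_bits)  # roughly 0.02 expected bits per rain sensor
```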
The market consists of many meteorologists looking at past data and other relevant information they can find elsewhere, and sharing the beliefs they reach about what the measurements will be, in the form of a statistical model / distribution over possible sensor values. After making their own models, they can compare them and consider the ideas others thought of that they didn’t, until, as the Agreement Theorem says they should, they reach a common agreed prediction about the likelihood of combinations of outcomes.
How they reach agreement is up to them, but to prevent overconfidence you’ve got the threat that others can simply bet against you (and if you’re wrong, you’ll lose), and to prevent underconfidence you’ve got the customer’s offer to pay out for higher-information predictions.
That agreed distribution becomes the output of the information market, and the customer pays for it according to how much information it contains over their naive prior, at the agreed rate.
How payment works is basically that everyone is kept honest by being paid in carefully shaped bets, designed to be profitable in expectation if their claims are true and losing in expectation if their claims are false or the information is made up. If the market knows you’re making it up, they can call your bluff before the prediction goes out by betting strongly against you, but there doesn’t need to be another trader willing to do that: if the change in the prediction caused by you is not a step towards more accuracy, then your bet will lose on average and you’d be better off not playing.
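One standard way to shape such a bet (a sketch of the general idea, not necessarily the exact payment rule proposed here) is a log-scoring bet against the market’s previous prediction: your payoff is the log-ratio between your prediction and the prior prediction at whatever outcome actually happens.

```python
from math import log2

def bet_payoff(outcome, p_before, p_after):
    """A log-score shaped bet on a binary event: you win
    log2(p_after / p_before) if the event happens and
    log2((1 - p_after) / (1 - p_before)) if it doesn't."""
    return log2(p_after / p_before) if outcome else log2((1 - p_after) / (1 - p_before))

def expected_payoff(truth, p_before, p_after):
    """Expectation of that bet under the true probability `truth`.
    It equals KL(truth || p_before) - KL(truth || p_after), so it is
    positive exactly when your move brings the prediction closer
    (in KL) to the truth."""
    return (truth * log2(p_after / p_before)
            + (1 - truth) * log2((1 - p_after) / (1 - p_before)))

# An honest update towards the truth wins on average...
print(expected_payoff(truth=0.15, p_before=0.10, p_after=0.15))   # > 0
# ...while a made-up move away from it loses on average.
print(expected_payoff(truth=0.10, p_before=0.10, p_after=0.25))   # < 0
```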
This is insanely high-risk for something like a single boolean market, where your bet will often lose by simple luck, but with a huge array of mostly uncorrelated features to predict, anyone actually adding information can expect to win enough bets on average to collect their earned profit.
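A small Monte Carlo sketch of that variance argument, with made-up numbers (the sensor count, the probabilities, and the log-score bet shape are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: 1000 roughly independent rain sensors, a market prior
# of 10% everywhere, and a trader whose beliefs (15% on half the sensors,
# 5% on the other half) happen to be the true probabilities.
n = 1000
q = np.where(np.arange(n) % 2 == 0, 0.15, 0.05)   # trader's (true) beliefs
p = np.full(n, 0.10)                               # market prior

weeks = 10_000
rain = rng.random((weeks, n)) < q                  # simulated outcomes

# Log-score shaped bet per sensor, as in the sketch above.
payoff = np.where(rain, np.log2(q / p), np.log2((1 - q) / (1 - p)))

single = payoff[:, 0]            # betting on one boolean market only
portfolio = payoff.sum(axis=1)   # betting across all 1000 sensors

print("single sensor loses money in", round((single < 0).mean() * 100), "% of weeks")
print("portfolio loses money in", round((portfolio < 0).mean() * 100, 2), "% of weeks")
```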