You will not take a bet with Bob. If he does not know the result of the coin, he will not take anything worse than even odds.
You should clearly not offer him even odds. If you offer him anything else, he will accept if and only if he knows you will lose.
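To make the adverse-selection argument concrete, here's a sketch under assumed numbers (the model details are mine, not from the thread): a fair coin, Bob learning the outcome with probability 1/2, and you offering him a $1-on-heads contract at price p. Whatever price you pick, your expectation is at most zero:

```python
# Sketch of why no offered price works in your favor.
# Assumed model: fair coin; Bob learns the outcome with probability 1/2.
# You offer Bob a contract paying $1 on heads, at price p.

def your_expected_profit(p, q_informed=0.5):
    """Your expected profit from offering a $1-on-heads contract at price p."""
    # Informed Bob: buys only when he knows it pays off (prob 1/2),
    # and then you pay out the full $1. At p = 1 he is indifferent, so no trade.
    informed = q_informed * 0.5 * (p - 1.0) if p < 1.0 else 0.0
    # Uninformed Bob: the contract is worth $0.50 to him, so he buys
    # only when you offer better than even odds (p < 0.5).
    uninformed = (1 - q_informed) * (p - 0.5) if p < 0.5 else 0.0
    return informed + uninformed

profits = [your_expected_profit(i / 100) for i in range(101)]
assert max(profits) <= 1e-9   # no price earns you a positive expectation
```

Below even odds the uninformed Bob picks you off too; at or above even odds only the informed-and-winning Bob trades. Either way you can do no better than not offering at all.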
Hang on, I just realized there’s a much simpler way to analyze the situations I described, which also works for more complicated variants like “Bob gets a 50% chance to learn the outcome, but you get a 10% chance to modify it afterward”. Since money isn’t created out of nothing, any such situation is a zero-sum game. Both players can easily guarantee themselves a payoff of 0 by refusing all offers. Therefore the value of the game is 0. Nash equilibrium, subgame-perfect equilibrium, no matter. Rational players don’t play.
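The "refuse" argument can be checked mechanically: take any zero-sum payoff matrix (the numbers below are made-up random payoffs, purely illustrative), append a refuse strategy for each player that forces a payoff of 0, and both the maximin and minimax values pin to exactly 0:

```python
# Minimal check: in a zero-sum game where either player can refuse
# (guaranteeing both a payoff of 0), the value of the game is exactly 0.
import random

random.seed(0)
n = 4
# Row player's payoffs; the column player receives the negation (zero-sum).
payoff = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
# Strategy n is "refuse": if either side refuses, both get 0.
for row in payoff:
    row.append(0.0)
payoff.append([0.0] * (n + 1))

maximin = max(min(row) for row in payoff)                  # row player's guarantee
minimax = min(max(payoff[i][j] for i in range(n + 1))      # column player's cap
              for j in range(n + 1))
assert maximin == minimax == 0.0   # value is 0: rational players don't play
```

Every row contains the zero of the refuse column, so no row guarantees more than 0; the refuse row guarantees exactly 0. The same holds for columns, so the value is pinned to 0 regardless of the other payoffs.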
That leads to the second question: which assumptions should we relax to get a nontrivial model of a prediction market, and how do we analyze it?
Robin Hanson argues that prediction markets should be subsidized by those who want the information. (They can also be subsidized by “noise” traders who are not maximizing their expected money from the prediction market.) Under these conditions, the expected value for rational traders can be positive.
Good link, thanks. So Robin knows that zero-sum markets will be “no-trade” in the theoretical limit. Can you explain a little about the mechanism of subsidizing a prediction market? Just give stuff to participants? But then the game stays constant-sum...
Basically, you’d like to reward everyone according to the amount of information they contribute. The game isn’t constant sum overall since the amount of information people bring to the market can vary. Ideally, you’d still like the total subsidy to be bounded so there’s no chance for infinite liability.
Depending on how the market is structured, if someone thinks another person has strictly more information than them, they should disclose that fact and receive no payout (at least in expectation). Hanson’s market scoring rules reward everyone according to how much they improve on the last person’s prediction. If Bob participates in the market before you, you should just match his prediction. If you participate before him, you can give what information you do have and then he’ll add his unique information later.
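Concretely, Hanson's logarithmic market scoring rule pays each trader for the improvement their report makes on the previous one. A minimal sketch for a binary event, with a made-up subsidy parameter and made-up trader reports (both are illustrative assumptions):

```python
# Sketch of a logarithmic market scoring rule for a binary event.
# Each trader moves the market probability from p_old to p_new and is paid
# b * ln(p_new / p_old), evaluated at the outcome that actually occurs.
import math

b = 10.0                         # subsidy scale chosen by the sponsor (assumed)
reports = [0.5, 0.3, 0.6, 0.8]   # initial prior, then three traders' updates

def outcome_prob(p, x):
    """Probability the report p assigns to the realized outcome x."""
    return p if x else 1 - p

def payout(p_old, p_new, x):
    return b * math.log(outcome_prob(p_new, x) / outcome_prob(p_old, x))

x = True  # suppose the event happens
payouts = [payout(reports[i], reports[i + 1], x) for i in range(len(reports) - 1)]

# Payouts telescope: the sponsor's total cost depends only on the first and
# last reports, so it is bounded by b * ln(1 / p_initial) no matter how many
# traders participate. A trader with nothing to add repeats the last report
# and is paid exactly 0.
total = sum(payouts)
assert abs(total - b * math.log(reports[-1] / reports[0])) < 1e-9
assert total <= b * math.log(1 / reports[0])
```

The telescoping sum is what bounds the subsidy, and the "repeat the last report, get paid 0" property is exactly the no-information case described above.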
Many thanks for the pointer to LMSR! That seems to answer all my questions.
(Why aren’t scoring rules mentioned in the Wikipedia article on prediction markets? I had a vague idea of what prediction markets were, but it turns out I missed the most important part, and asked a whole bunch of ignorant questions… Anyway, it’s a relief to finally understand this stuff.)
They should be. Just a matter of someone stepping up to write that section. The modern theory on market makers has existed for less than a decade and only matured in the last few years, so it just hasn’t had time to percolate out. Even here on Less Wrong, where prediction markets are very salient and Hanson is well known, there isn’t a good explanation of the state of the art. I have a sequence in the works on prediction markets, scoring rules, and mechanism design in an attempt to correct that.
That would be great! If you need someone to read drafts, I’d be very willing :-)
There’s no problem with the game being constant sum.
I always assumed it was by selling prediction securities for less than they will ultimately pay out.
The assumption you should relax is that of an objective probability. If you treat probabilities as purely subjective, and say that P(X)=1/3 means my decision procedure weights the world where not-X holds twice as heavily as the world where X holds, then we can make a trade.
Let’s say I say P(X)=1/3 and you say P(X)=2/3, and I bet you a dollar that not-X. Then I pay you a dollar in the world that I do not care about as much, and you pay me a dollar in the world that you do not care about as much. Everyone wins.
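The arithmetic behind this, spelled out with the thread's own numbers:

```python
# Each side evaluates the same $1 bet by their own subjective probability.
p_mine, p_yours = 1 / 3, 2 / 3   # our respective subjective probabilities of X

# I bet $1 on not-X: I win $1 in not-X worlds, lose $1 in X worlds.
my_ev = (1 - p_mine) * 1 + p_mine * (-1)
# You take the other side: win $1 in X worlds, lose $1 in not-X worlds.
your_ev = p_yours * 1 + (1 - p_yours) * (-1)

assert my_ev > 0 and your_ev > 0   # by our own lights, we each expect +$1/3
```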
This model of probability is kind of out there, but I am seriously considering that it might be the best model. Wei Dai argues for it here.
I know Wei’s model and like it a lot, but it doesn’t solve this problem. With subjective probabilities, the exchange of information between players in a market becomes very complicated, like Aumann agreement but everyone has an incentive to mislead everyone else. How do you update when the other guy announces that they’re willing to make such-and-such bet? That depends on why they announce it, and what they anticipate your reaction to be. When you’re playing poker and the other guy raises, how do you update your subjective probabilities about their cards? Hmm, depends on their strategy. And what does their strategy depend on? Probably Nash equilibrium considerations. That’s why I’d prefer to see a solution stated in game-theoretic terms, rather than subjective probabilities.
ETA: see JGWeissman’s and badger’s comments, they’re what I wanted to hear. The answer is that we relax the assumption of zero-sum, and set up a complex system of payouts to market participants based on how much information they give to the central participant. It turns out that can be done just right, so the Nash equilibrium for everyone is to tell their true beliefs to the central participant and get a fair price in return.
Game theory in these settings is built on subjective probabilities! The standard solution concept in incomplete-information games is even known as Bayes-Nash equilibrium.
The LMSR gives a guarantee stronger than Nash equilibrium, assuming everyone participates only once: in that case, it’s a dominant strategy to be honest, rather than just a best response. If people participate multiple times, the Bayes-Nash equilibrium is harder to characterize. See Gao et al. (2013) for the best current description, which roughly says you shouldn’t reveal any information until the very last moment. The paper has an overview of the LMSR for anyone interested.
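For anyone who wants the mechanics, here's a minimal sketch of the LMSR in its cost-function (automated market maker) form; the liquidity parameter b and the trade size are made-up numbers:

```python
# Sketch of the LMSR as an automated market maker.
# Traders buy and sell outcome shares; the market maker charges the
# difference in the cost function C(q) = b * ln(sum_i exp(q_i / b)).
import math

def cost(q, b):
    """LMSR cost function over outstanding share vector q."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def price(q, b, i):
    """Instantaneous price of outcome i; prices form a probability vector."""
    z = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / z

b = 100.0                # liquidity/subsidy parameter (assumed)
q = [0.0, 0.0]           # outstanding shares; binary market starts at 1/2

# A trader buys 30 shares of outcome 0, paying the cost difference.
q_new = [30.0, 0.0]
trade_cost = cost(q_new, b) - cost(q, b)

assert abs(sum(price(q_new, b, i) for i in range(2)) - 1) < 1e-9
# Worst case for the sponsor: outcome 0 occurs and all 30 shares pay $1 each.
worst_loss = 30.0 - trade_cost
assert worst_loss <= b * math.log(2) + 1e-9   # loss bounded by b * ln(#outcomes)
```

The bounded worst-case loss, b times the log of the number of outcomes, is the same subsidy bound as in the scoring-rule formulation; the two forms are equivalent.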
Thanks for the link to Gao et al. It looks like the general problem is still unsolved, would be interesting to figure it out...
Maybe I should try to turn this comment into a full discussion post.