I joined the Discord to look at the discussion. The paper was posted three separate times, and it seems to have been dismissed out of hand without much effort to understand it.
The first time it was posted, it was pretty much ignored.
The second time, it was dismissed without any discussion.
The third time, someone said they believed it had already been discussed, and Jack added this comment:
Yeah, it’s possible I’ve missed something critical, but I think the paper assumes that the agents get payouts from the market mechanism and can’t pay each other. This is a totally unrealistic assumption in most prediction markets, including Manifold, although one could imagine ways to make it (slightly) more realistic by hiding everyone’s identities, as crypto prediction markets sort of do. Under that assumption, the paper says we can make self-resolving prediction markets basically incentive compatible. The main problem this paper tackles is: if you make a prediction, you profit if other people update more in the direction of your prediction, so you may be able to strategically predict overconfidently to try to profit more. This paper says you can structure the payouts so that people have near-zero incentive to strategically mislead with their bets. This is pretty interesting and not an obvious result! But if you can just pay off others to resolve the way you want, then this totally fails anyway.
I’m not sure how true Jack’s objection is, and if it is, how bad it would actually be in practice (which is why it’s worth testing empirically), but I’ll ask the author for his thoughts on this and share his response. I’ve already had some back and forth with him about other questions I had.
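To make the incentive problem Jack describes concrete, here’s a toy sketch of my own (the herding model and all the numbers are made up for illustration; this is not the paper’s mechanism): under a log market scoring rule, honest reporting maximizes expected payout when the outcome is exogenous, but if resolution depends on later reports that partly follow yours, overstating can pay.

```python
import math

def log_msr_payout(p_prev, p_report, outcome):
    """Log market scoring rule: payout for moving the market estimate
    from p_prev to p_report, given the resolved outcome (1=YES, 0=NO)."""
    if outcome == 1:
        return math.log(p_report) - math.log(p_prev)
    return math.log(1 - p_report) - math.log(1 - p_prev)

belief, p_prev = 0.6, 0.5  # trader's true belief vs. current market price

# Exogenous resolution: YES happens with probability `belief` regardless
# of what anyone reports. Honest reporting (0.6) beats overstating (0.9).
for r in (0.6, 0.9):
    ev = belief * log_msr_payout(p_prev, r, 1) + (1 - belief) * log_msr_payout(p_prev, r, 0)
    print(f"exogenous resolution, report {r}: EV = {ev:+.4f}")

# Self-resolving caricature: later traders partly follow the report, so
# the market resolves YES with probability 0.5*belief + 0.5*report.
# Now overstating (0.9) yields a higher expected payout than honesty.
for r in (0.6, 0.9):
    p_yes = 0.5 * belief + 0.5 * r
    ev = p_yes * log_msr_payout(p_prev, r, 1) + (1 - p_yes) * log_msr_payout(p_prev, r, 0)
    print(f"self-resolving caricature, report {r}: EV = {ev:+.4f}")
```

As I understand it, the payout restructuring in the paper is aimed at killing exactly that second effect.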
Some things worth noting:
There’s discussion there about self-resolving markets that don’t use this model (like the one in Jack’s article), which isn’t directly relevant here.
This is the first proof of concept, so it makes sense that it has a bunch of limitations; it’s plausible they can be overcome, so I wouldn’t be quick to abandon the approach.
Even if it’s not good enough for fully self-resolving prediction markets, I think you could use it for “partially” self-resolving markets where it’s uncertain whether the market will be verifiable, like conditional markets and replication markets. If the result can’t be verified, the market self-resolves instead of resolving to N/A and refunding the participants (see the sketch below). That way there’s an increased incentive to participate, because you know the market will resolve either way, but it also grounds you in truth, because the market may resolve based on real events rather than on the self-resolving mechanism.
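A minimal sketch of that fallback rule; the function name, its arguments, and the resolution values are my own hypotheticals, not from the paper:

```python
def resolve_market(verified_outcome, crowd_probability):
    """Hypothetical 'partially self-resolving' rule: resolve on the
    real-world outcome when one could be verified, and otherwise fall
    back to the self-resolving mechanism instead of refunding (N/A).

    verified_outcome: 1 or 0 if the event could be verified, None otherwise.
    crowd_probability: the value produced by the self-resolving mechanism
    (e.g., an aggregate of late trader reports).
    """
    if verified_outcome is not None:
        return float(verified_outcome)  # ground truth wins when available
    return crowd_probability            # otherwise, self-resolve

# A replication market where the replication was actually attempted:
print(resolve_market(verified_outcome=1, crowd_probability=0.8))     # 1.0
# The replication was never attempted: self-resolve rather than N/A.
print(resolve_market(verified_outcome=None, crowd_probability=0.8))  # 0.8
```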
Here’s Siddarth’s response:
By “can’t pay each other,” is the author referring to the fact that agents can engage in side trades outside the market, or that agents interact with a mechanism rather than trading directly with each other?
In the latter case, I’d point to Appendix E, where we explain that we can easily cast our mechanism as one where traders trade with an automated market maker. So yes, agents still aren’t trading with each other directly, although they can trade “securities” with the market maker.
I think this is totally fine since it’s the same “user experience” for traders (it’s just an algorithm, not a person, on the other side of the trade). The market maker in the middle greases the wheels of the trade, providing liquidity so you don’t have to wait for a counterparty to arrive at the same time. If one is still unhappy with this, they can just have it operate as a direct exchange anyway. We can’t allow for that in our theory because of some technical theoretical results (“no-trade theorems”) that rest on pretty unrealistic assumptions, but nothing (in practice) stops one from using a continuous double auction so that traders trade with each other.
In the former case, it’s true we don’t consider outside incentives (which traditionally are not included in our kind of analysis anyway). It could be an interesting direction for future work, but I’m not sure that side trades are any more fatal to self-resolving markets than to regular prediction markets. For one, you don’t know who the last trader is because of random termination, so market manipulation will probably be inefficient (you might even have to pay others more than you put in). If you just want to pump the price and cash out, that’s no different from pump-and-dumps in regular prediction markets. And in general, it takes a whale to sustain a price at a level unwarranted by fundamentals, which applies to regular prediction markets too. Another way I view this is by analogy to stock markets: is it easier to buy a stock and then pump the price by paying others to buy the stock too? In some contexts, yes, but this is why market surveillance is helpful and such behavior is illegal. All of this is to say that I think concerns around manipulation apply, but not much more than to regular prediction markets. Perhaps I misunderstand the concern, in which case a concrete example of the concern would help.
Thanks for mentioning it!
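A note on the automated-market-maker point in the response above: for readers unfamiliar with what “trading with a market maker” looks like, here’s a sketch of Hanson’s logarithmic market scoring rule (LMSR) market maker, a standard automated counterparty. This is a generic illustration of the trading interface, not necessarily the exact construction in the paper’s Appendix E.

```python
import math

class LMSRMarketMaker:
    """Hanson's LMSR market maker: an algorithmic counterparty that
    quotes prices from a cost function, so traders never need to wait
    for another human to take the other side of the trade."""

    def __init__(self, liquidity_b=100.0):
        self.b = liquidity_b                  # higher b = deeper liquidity
        self.q = {"YES": 0.0, "NO": 0.0}      # outstanding shares per outcome

    def _cost(self):
        return self.b * math.log(sum(math.exp(v / self.b) for v in self.q.values()))

    def price(self, outcome):
        """Current instantaneous price (an implied probability) of an outcome."""
        denom = sum(math.exp(v / self.b) for v in self.q.values())
        return math.exp(self.q[outcome] / self.b) / denom

    def buy(self, outcome, shares):
        """Buy `shares` of `outcome`; returns the cost charged by the AMM."""
        before = self._cost()
        self.q[outcome] += shares
        return self._cost() - before

mm = LMSRMarketMaker(liquidity_b=100.0)
print(f"price before: {mm.price('YES'):.3f}")   # starts at 0.500
cost = mm.buy("YES", 50)
print(f"paid {cost:.2f} for 50 YES shares")
print(f"price after:  {mm.price('YES'):.3f}")   # buying pushes the price up
```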
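And on the random-termination point: here’s a toy calculation, reusing the same caricature log-scoring setup as the overconfidence sketch earlier (all numbers and the correction model are made up, not the paper’s analysis). It shows why not knowing whether you’re the last trader makes a pump unprofitable in expectation unless the chance the market terminates right after your trade is quite high.

```python
import math

def msr_ev(p_resolve_yes, p_prev, p_report):
    """Expected log-MSR payout of moving the market from p_prev to
    p_report when the market resolves YES with probability p_resolve_yes."""
    win = math.log(p_report) - math.log(p_prev)
    lose = math.log(1 - p_report) - math.log(1 - p_prev)
    return p_resolve_yes * win + (1 - p_resolve_yes) * lose

# Consensus belief is 0.6; a manipulator pumps the report to 0.9 hoping
# the market terminates before anyone corrects it. Caricature: the market
# resolves YES with probability equal to the final report.
consensus, pump = 0.6, 0.9
ev_sticks = msr_ev(pump, consensus, pump)          # nobody gets to correct
ev_corrected = msr_ev(consensus, consensus, pump)  # someone corrects to 0.6

for p_end in (0.1, 0.3, 0.5, 0.7):
    ev = p_end * ev_sticks + (1 - p_end) * ev_corrected
    print(f"P(terminate right after pump) = {p_end}: manipulator EV = {ev:+.3f}")
```

In this toy model the pump only has positive expected value once the per-trade termination probability climbs past roughly 0.6, which is exactly the “you might even have to pay others more than you put in” point.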