I really hope we get a real-money version of this some day. I think the incentive problems might actually solve themselves. If something is at 90% and you resolve it negatively so you can cash in with your alt accounts, probably people will notice that you did that and stop participating in your markets. The only markets that get really big will be those created by people who have built up a reputation for fairness.
Whether reputation works may depend on the questions asked. Suppose I ask whether I will enjoy my trip to Miami, a question that may attract people who don’t even know me but have been there, and the outcome of which cannot be verified. If I can resolve such questions in a way that [edited:] allows me to cash in with my alt accounts, it will take a long time until people can get their suspicions probabilistically confirmed.
Seems like people would just solve this problem by being distrustful (on these kinds of questions) by default. Question posers would bootstrap trust either via their pre-existing reputations, or by honestly resolving a bunch of easy-to-evaluate questions first.
I’m not sure I understand—are you saying that a subjective personal question is one where you’d be more tempted to resolve incorrectly (or delay resolution)? There’s no clear benefit to the market creator of delaying a resolution (they can’t spend the funds that are committed to the markets), but definitely you’re taking on some risk that the market creator will insider trade or otherwise act unethically on their market.
Purely subjective personal questions are questions where others cannot reliably check whether you resolved them in an “unfair” way. So reputation does not work either, or at least it takes a long time to build.
I edited the text of my first comment, using the words from Daniel’s comment. Maybe it’s easier to understand now.
Oh, yes, that’s a fair point! I think personal questions may self-correct for this, because they’ll draw in less interest and less volume compared to a general-interest question (so possible fraud on personal questions is less profitable). Creators may have more of an informational incentive to let personal markets work well?
But it is a good point, that personal questions are much harder to audit and thus contribute less to reputation; if we formalize a reputation system it’s one factor to consider!
Thanks! A real money/crypto version of the Manifold is very high on our priorities as well; they do have their own challenges (regulation for real money, technical infrastructure for crypto), but we’re optimistic about being able to solve them.
And the mechanism you describe around reputation for fairness is exactly how we expect things to play out! I do think some more work around surfacing some kind of judgment metric could be useful (e.g. total amount fairly adjudicated), but we have more thinking to do. If anyone has thoughts on what reputational metrics could be useful, let us know!
I’m not sure a formal metric is necessary. Maybe you could just have a “controversy” page associated with each user, where people can complain about how particular questions were resolved, and e.g. post evidence like “An anonymous account bought $10k worth of No when the probability was at 92%, and then an hour later the question resolved No!” Someone who is really trying to scam people would probably pretty quickly accumulate a controversy page that anyone could see at a glance was damning.
The exception to this would be “grey area” questions where it is genuinely subjective how they should go. For those questions the creator can make a profit via anonymous accounts without anyone being able to tell what’s happening. But hopefully this isn’t a huge deal. For comparison, people will resolve many grey-area questions in a biased way anyway, e.g. “Will Trump attempt to illegally hold on to power if he loses the 2020 election?” would probably be resolved positively if a Democrat created the question and negatively if a Republican did. If the amount of bias/noise introduced by illicit profit-making is no bigger than the “baseline” amount of bias/noise inherent in the system, then maybe it’s not worth worrying about.
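To make the kind of evidence a controversy page could surface concrete, here is a minimal sketch of a flagging heuristic (all names and thresholds are purely illustrative, not anything any real site implements): flag large trades placed against a strong market consensus shortly before the question resolved in the trader’s favor.

```python
from dataclasses import dataclass

@dataclass
class Trade:
    account: str
    side: str                       # "YES" or "NO"
    amount: float                   # dollars staked
    prob_before: float              # market P(YES) when the trade was placed
    hours_before_resolution: float

def flag_suspicious(trades, resolution, min_amount=1000.0,
                    prob_threshold=0.9, hours_window=24.0):
    """Flag large trades placed against a strong market consensus
    shortly before the question resolved in the trader's favor."""
    flagged = []
    for t in trades:
        # A trade is contrarian if it bets against a >=90% consensus.
        contrarian = ((t.side == "NO" and t.prob_before >= prob_threshold)
                      or (t.side == "YES" and t.prob_before <= 1 - prob_threshold))
        won = (t.side == resolution)
        if (contrarian and won and t.amount >= min_amount
                and t.hours_before_resolution <= hours_window):
            flagged.append(t)
    return flagged

# The scenario above: a $10k No purchase at 92%, resolved No an hour later.
trades = [
    Trade("anon123", "NO", 10_000.0, 0.92, 1.0),
    Trade("regular", "YES", 50.0, 0.90, 30.0),
]
print([t.account for t in flag_suspicious(trades, resolution="NO")])  # ['anon123']
```

Of course, such a heuristic only gathers candidates for public complaint; the grey-area problem above is exactly the case it cannot catch.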
Originally I was going to suggest paying the question creators 1% of the proceeds of each question. However I think that might not be necessary. They are getting rewarded by having their questions answered, after all.
We do actually pay out the question creators! Right now it’s 4% of profits. We don’t do a great job of making this understandable in the UI though—and predictably (heh) most of our creators are more interested in the question outcome than in earning transaction fees.
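To illustrate the arithmetic, the only figure taken from the comment above is the 4%-of-profits rate; the exact fee schedule sketched here (a cut of each trader’s positive profit, losses paying nothing) is an assumption:

```python
def creator_fee(trader_profits, fee_rate=0.04):
    """Creator's cut: fee_rate of each winning trader's profit.
    Illustrative only; the real fee schedule may differ."""
    return sum(max(p, 0.0) * fee_rate for p in trader_profits)

# Two traders profit $100 and $250; one loses $40 (no fee on losses here):
print(creator_fee([100.0, -40.0, 250.0]))  # 14.0, i.e. 4% of the $350 won
```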
A controversy page is interesting—kind of like Airbnb or Amazon reviews, but on a seller rather than on a product.
This is one of those “could easily go wrong in any number of ways” ideas, but...
You could plausibly have reputation encoded in other prediction markets. Like, I create a market “will X happen?” and people don’t know how much to trust me. A trusted user could create markets for any or all of:
Will X happen? (Based on their own judgment, not mine.)
Will philh judge correctly whether X happened?
Conditional on X happening, will philh judge that X happened?
Conditional on X not happening, will philh judge that X didn’t happen?
And people could look at those markets to guess how much they should trust me, and people who know something about me can play in them.
Though that first one could also be done with the motivation of getting the profits from the question, where people will prefer to play in the trusted user’s market instead of mine, which seems maybe not great.
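For what it’s worth, the two conditional markets in the list above combine with a base-rate market on X into a single trust estimate via the law of total probability (the function name is mine; this is just a sketch of the arithmetic):

```python
def p_correct_judgment(p_x, p_judge_x_given_x, p_judge_not_x_given_not_x):
    """P(correct) = P(X) * P(judge X | X) + P(not X) * P(judge not-X | not X),
    reading each probability off the corresponding market price."""
    return p_x * p_judge_x_given_x + (1 - p_x) * p_judge_not_x_given_not_x

# Markets say: 70% X happens; 95% I call it right if it does; 90% if it doesn't.
print(p_correct_judgment(0.7, 0.95, 0.9))  # about 0.935
```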
There’s still some interface work for making these reputational markets more common and visible, though—if a popular market is judged likely to be fraudulently resolved, this should be very noticeable to a new user.
Kleros is another (crypto) solution for deciding in contentious cases; I believe Omen actually supports Kleros-mediated contracts as a fallback for their user-generated markets.
Haha, some of our users have already invented similar markets for seeing if a market will be resolved correctly (e.g. https://manifold.markets/RavenKopelman/will-dr-ps-question-about-trump-bei ). I think this is a pretty promising solution!
You can use Futuur for real-money markets! Btw, they are planning to have user-generated markets as well.