Thanks! A real money/crypto version of the Manifold is very high on our priorities as well; they do have their own challenges (regulation for real money, technical infrastructure for crypto), but we’re optimistic about being able to solve them.
And the mechanism you describe around reputation for fairness is exactly how we expect things to play out! I do think some more work around surfacing some kind of judgment metric could be useful (e.g. total amount fairly adjudicated), but we have more thinking to do. If anyone has thoughts on what reputational metrics could be useful, let us know!
I’m not sure a formal metric is necessary. Maybe you could just have a “controversy” page associated with each user, where people can complain about how particular questions were resolved and post evidence like “An anonymous account bought $10k worth of No when the probability was at 92%, and an hour later the question resolved Yes!” Someone who is really trying to scam people would probably quickly accumulate a controversy page that anyone could see at a glance was pretty damning.
The exception to this would be “grey area” questions where the right resolution is genuinely subjective. For those questions the creator can make a profit via anonymous accounts without anyone being able to tell what’s happening. But hopefully this isn’t a huge deal. For comparison, people will resolve many grey-area questions in a biased way anyway; e.g. “Will Trump attempt to illegally hold on to power if he loses the 2020 election?” would probably be resolved positively if a Democrat created the question and negatively if a Republican did. If the amount of bias/noise introduced by illicit profit-making is no bigger than the “baseline” amount of bias/noise inherent in the system, then maybe it’s not worth worrying about.
Originally I was going to suggest paying the question creators 1% of the proceeds of each question. However I think that might not be necessary. They are getting rewarded by having their questions answered, after all.
We do actually pay out the question creators! Right now it’s 4% of profits. We don’t do a great job of making this understandable in the UI though—and predictably (heh) most of our creators are more interested in the question outcome than in earning transaction fees.
A controversy page is interesting—kind of like Airbnb or Amazon reviews, but on a seller rather than on a product.
This is one of those “could easily go wrong in any number of ways” ideas, but...
You could plausibly have reputation encoded in other prediction markets. Like, I create a market “Will X happen?” and people don’t know how much to trust me. A trusted user could create markets for any or all of:

- Will X happen? (Based on their own judgment, not mine.)
- Will philh judge correctly whether X happened?
- Conditional on X happening, will philh judge that X happened?
- Conditional on X not happening, will philh judge that X didn’t happen?
And people could look at those markets to guess how much they should trust me, and people who know something about me can play in them.
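One nice property of these markets is that their prices constrain each other: given the unconditional probability of X and the two conditional judgment markets, you can back out what the “will philh judge correctly?” market should trade at, and a large gap would itself be a signal. A minimal sketch (the function name and all the numbers are hypothetical, just for illustration):

```python
def implied_correct_resolution_prob(p_x: float,
                                    p_correct_given_x: float,
                                    p_correct_given_not_x: float) -> float:
    """Probability the creator resolves correctly, implied by the
    unconditional market P(X) and the two conditional markets:
    P(correct) = P(X) * P(correct | X) + P(not X) * P(correct | not X)."""
    return p_x * p_correct_given_x + (1 - p_x) * p_correct_given_not_x

# Example: the market says X is 70% likely; the conditional markets say
# the creator resolves correctly 95% of the time if X happens and 80%
# of the time if it doesn't.
p = implied_correct_resolution_prob(0.70, 0.95, 0.80)  # ≈ 0.905
```

So if the standalone “will philh judge correctly?” market were trading well below ~0.9 here, the four markets would be mutually inconsistent, which is exactly the kind of thing a trader (or the interface) could flag.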
Though that first market could also be created just to capture the question’s transaction fees, with people preferring to play in the trusted user’s market instead of mine, which seems maybe not great.
Haha, some of our users have already invented similar markets for seeing if a market will be resolved correctly (e.g. https://manifold.markets/RavenKopelman/will-dr-ps-question-about-trump-bei ). I think this is a pretty promising solution!
There’s still some interface work to do in making these reputational markets more common and visible, though—if a popular market is judged likely to be fraudulently resolved, this should be very noticeable to a new user.
Kleros is another (crypto) solution for deciding in contentious cases; I believe Omen actually supports Kleros-mediated contracts as a fallback for their user-generated markets.