Towards no-math, graphical instructions for prediction markets
You are probably familiar with prediction markets. As of the last general update, Robin Hanson notes that while there is increasing research interest and a few general platforms are coming soon, the real goal of implementing them inside organizations remains elusive.
I strongly suspect that it will remain so until companies are founded which use them from the beginning. This is because introducing people to new tools is hard. Introducing them to new tools that don't even have a direct bearing on doing their jobs is even harder. We know that trivial inconveniences are important; it was the dominant theme of the crypto autopsy. If trivial inconveniences stopped even LessWrong readers from buying crypto, I expect the problem will be worse in virtually all other contexts.
If prediction markets are going to be widely adopted, they need to be widely applicable, which basically means that any given schlub needs to be able to make a bet in a market without any math/economics/computing background. Happily, we have extremely robust evidence that any given schlub is perfectly capable of making a bet, courtesy of casinos.
One way to help with this problem is to make it as easy and intuitive as possible to learn. To wit: interactive visualizations. The biggest inspirations are Up and Down the Ladder of Abstraction by Bret Victor, and Metaculus. The first is about exploring system behavior without math, and the training for the second does a pretty good job of letting you make predictions based on the shape of the probability curve (though this doesn't work as well for understanding the payoffs). Another relevant tool is the Sankey diagram, which makes the relative sizes of flows apparent at a glance. I don't know of anything which makes such a thing interactive, though, which is a shame, because I bet it would make control systems comprehensible to people who don't know any differential equations or linear algebra. I am currently impressed by Seeing Theory. I ran across a few other websites which are designed to communicate math, and even one about risk, but for some reason all these savages are using Flash.
However, all of these are mostly for teaching math, and that is not actually the goal. Fundamentally the idea is that information flows into the market, and only payments need to flow back out. So what we need is a way for people to specify what they think the outcome will be, and with what confidence, without numbers.
Naively I am leaning in this direction:
Numbers are presented as areas
Functions are presented as shapes
Operations show up as transformations of the areas/shapes (for example, showing how a given bet affects the overall market, or maybe combining sub-bets).
We want people to be able to translate their beliefs this way pretty easily. Stuff like “the more sure you are, the narrower the shape should be,” where the shape is the distribution curve, is intuitive.
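To illustrate how little machinery that heuristic needs, here is a minimal sketch of a dragged shape being translated into a distribution behind the scenes. All names are hypothetical and nothing here comes from an existing tool; the user would only ever see the shape, never the formula.

```python
import numpy as np

def shape_to_distribution(center: float, width: float, n: int = 200):
    """Map a drag-and-drop shape to a probability distribution.

    `center` is where the user dragged the peak of the shape;
    `width` is how wide they stretched it (narrower = more confident).
    The shape is rendered here as a normal-ish bump, but any unimodal
    curve would serve the same purpose.
    """
    xs = np.linspace(0.0, 1.0, n)             # outcome space, e.g. 0%..100%
    density = np.exp(-0.5 * ((xs - center) / width) ** 2)
    density /= density.sum()                  # normalize so the area is 1
    return xs, density

# A confident belief that the outcome lands near 70%:
xs, confident = shape_to_distribution(center=0.7, width=0.05)
# The same belief held weakly: same center, wider shape.
xs, unsure = shape_to_distribution(center=0.7, width=0.25)
```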
I feel like the instructions could be collapsed into a few drag-and-drop shapes, and a few heuristic guidelines for how to manipulate them to represent a belief accurately. Training people how to do this should easily fit into a single-session course, not more than a day if we include motivation, the payoffs, and a bunch of trial runs.
There have long been lots of unexplored good ideas for interface improvements. But they need to be tested in the context of real systems and users.
I made an astonishing pile of points on SciCast from people who did not understand what the simple "user friendly" interface was doing. Turning "what I think the distribution is" into a single trade is a fraught process; permit the user to do it repeatedly for a given question and I will actually show up and take absolutely all of their money.
I’d be interested in hearing more about this; I’m unfamiliar with the SciCast interface.
I notice you put user friendly in quotes—do you think it was not in fact user friendly, or was it misleading in an important dimension?
For comparison, I don’t think the Metaculus interface is simple enough to do the job. I agree it would be a fraught process, but my starting assumption is that developing an intuitive and correct UI is always a fraught process.
I realized after my initial comment that it was a bit too terse to be productive. Sorry.
(pre-note: Metaculus is not a market, and I don't consider it interesting. I'm addressing, and interested in, only the case of an actual market where binary securities on events are being traded, either directly with other market participants or all through a market maker like SciCast.)
SciCast had two (major & relevant) modes for placing a trade, I remember them as “easy” and “normal”. Normal presented you with a slider from 0% to 100%, you could move the slider back and forth, and as you moved it, you were presented with a continuously updating quote for the “cost” of making the trade. The cost could be negative (equivalent to selling a previously held long position, or closing an existing short position), or positive (going long, or covering a short, whatever). After you made a trade, the interface immediately reflected the percentage that you’d chosen as the new value of the market. While you could go terribly wrong with this interface (and a lot of people did), it was at least possible to do reasonable things.
The Very Bad interface was the "easy" one. Instead of a slider, and instead of being given any feedback about the cost of your trade, you were presented simply with five choices, probably something like <20%, 20-40%, 40-60%, 60-80%, and >80%. I don't remember the exact equations that were used at this point, but it caused you to submit a trade that moved the market from wherever it was to as close as it could get to the midpoint(?) of the region, without spending more than 1/2 (1/4? something) of your remaining points.
You weren’t shown how many points the trade was about to cost you. And neither interface showed you how many points you already had invested in the positions you were adjusting!
About the worst thing (I remember) that happened to naive users (through no fault of their own, IMO) was to come along to a question where the current state of the market was maybe 5%, and then do an easy mode trade for “<20%”. The system would spend their points to move the market from 5% to 10%.
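For concreteness: market makers of this kind are typically built on Hanson's logarithmic market scoring rule (LMSR). I don't know SciCast's actual equations, so the sketch below assumes a plain binary LMSR with invented parameters and the half-remembered cap above, but it reproduces both modes and the 5%-to-10% failure just described.

```python
import math

B = 100.0  # LMSR liquidity parameter; value invented for illustration

def cost_to_move(p0: float, p1: float, b: float = B) -> float:
    """Points charged to move the YES price from p0 to p1.

    Standard LMSR identity: cost = b * ln((1 - p0) / (1 - p1)).
    Negative when p1 < p0 (selling YES returns points to you).
    """
    return b * math.log((1.0 - p0) / (1.0 - p1))

def normal_mode_quote(current: float, slider: float) -> float:
    # "Normal" mode: re-quote continuously as the slider moves.
    return cost_to_move(current, slider)

def easy_mode_trade(current: float, bucket_mid: float,
                    points: float, cap_frac: float = 0.5,
                    b: float = B) -> tuple[float, float]:
    """'Easy' mode as half-remembered above: push the market toward the
    bucket midpoint, spending at most cap_frac of the remaining points.
    Returns (new market price, points spent)."""
    cap = cap_frac * points
    cost = cost_to_move(current, bucket_mid, b)
    if cost <= cap:
        return bucket_mid, cost
    # Can't afford the full move: solve cost_to_move(current, p) == cap for p.
    p = 1.0 - (1.0 - current) * math.exp(-cap / b)
    return p, cap

# The failure case just described: market at 5%, naive user clicks "<20%".
# Easy mode *buys* toward the bucket midpoint (10%), raising the price,
# even though the user meant "this is unlikely".
new_price, spent = easy_mode_trade(current=0.05, bucket_mid=0.10, points=1000)
print(new_price, spent)  # ~0.10, at a cost of a few points
```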
The worst thing (which the bad interface encouraged), was users who thought that they should be trading until the market forecast matched their forecast. So if the easy mode trade didn’t move the market to where they wanted (or I’d moved the market back in the seconds following their trade), they’d do another easy mode trade. Over and over, with no idea of the position they were building up on a single question.
The short version of the problem, in real-world stock market terms, is: just because you think AAPL is only going to go up in the long run doesn't mean that you dump the entirety of your savings into it via market orders.
Think about Kelly betting for a minute: just because the user has given you what they believe the odds to be isn't enough to compute the correct bet for them. You also need to know their bankroll. And if you're letting them trade on lots of questions simultaneously, you should be taking in a lot of probability estimates from the user, comparing them all to the state of the market, and making only the trades that are profitable: ones where 1) the user believes the market is very wrong and 2) corrections can be made inexpensively.
(For bonus points the user gives you the conditional probabilities between all those things, too, so that you can avoid betting too much on correlated trades.)
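To spell out the Kelly point: for a binary contract, the optimal stake is a function of the user's belief, the market price, and the bankroll, not the belief alone. A minimal sketch using the standard Kelly formula (the point amounts are invented):

```python
def kelly_fraction(belief: float, price: float) -> float:
    """Kelly-optimal fraction of bankroll for a binary contract paying 1.

    belief: the user's probability of YES; price: the market's.
    Returns 0 when there is no edge.
    """
    if belief > price:   # buy YES at `price`
        return (belief - price) / (1.0 - price)
    if belief < price:   # buy NO at (1 - price)
        return (price - belief) / price
    return 0.0

# Same belief (70%), same bankroll (500 points), two different markets:
small_edge = 500 * kelly_fraction(belief=0.70, price=0.65)  # ~71 points
large_edge = 500 * kelly_fraction(belief=0.70, price=0.30)  # ~286 points
```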
that's, uh, probably enough for now. I'll keep an eye out for any questions.
Your critique is plausible. I was never a fan of these supposedly simple interfaces.
Interesting! We designed "safe mode" to be safe against the common newbie failure of losing all your cash in a few big trades. It achieved that much at least. Each trade spent at most about *1%* of your cash, and moved the market at most halfway towards your goal. A dedicated newbie could still burn cash, but it took effort.
Still, from your report, that may have happened more than I thought. If so, it may not even have achieved its main design goal. :-(
“The worst thing” was indeed atrocious. We fixed that fairly quickly, but … ugh.
Thanks for the thoughtful critique. We’re about to reuse the engine. I’m giving this some thought.
I'm not sure if it would fix easy mode, but I feel that the points that go into a trade should be some function of the points that have already gone into that question, such that a user's total loss on a given question is capped.
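One way to read that suggestion, sketched with invented names and numbers:

```python
def capped_stake(requested: float, already_invested: float,
                 question_cap: float = 50.0) -> float:
    """Shrink each successive trade so total exposure on one question
    never exceeds the cap; repeated easy-mode trades then converge
    instead of compounding into an unbounded position.
    """
    remaining = max(0.0, question_cap - already_invested)
    return min(requested, remaining)
```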
That is brutally bad. What was the training information like? Was it even possible for a naive user to become less naive without losing their ass a bunch of times?
I remember the tutorials as focusing on easy mode, and treating it as the thing to start with. With Charles refreshing my memory about how easy mode behaved (thank you), I think there was plenty of time for a slow and cautious user to come in and perhaps question the nature of easy mode before they’d done any damage to their account. Certainly I used it a bit at the start, and still recovered nicely.
Hoping that your users will be slow and/or cautious isn’t an ideal plan, though.
People bet a lot on sports despite sports odds having very confusing notation. Many non-professionals trade options or crypto with numerical interfaces. A lot of popular videogames have resources denominated in numbers.
The way info from the non-numerate gets incorporated into financial markets today is that more sophisticated people & firms scrape social media or look at statistics (like those generated by consumer activity). Markets do not need to be fully accessible in order to be accurate.
I’m very skeptical of the need to represent functions.
That said, I'm always game to try out building new, simpler trading interfaces. At Manifold we tried a mobile interface where you swiped left to bet no and right to bet yes. It was kinda fun and saw limited use, but we ended up killing it because it was a lot to maintain.
I agree with this in general, but it doesn’t seem true for the specific use-case motivating the post. The problem I am thinking about here is how to use a prediction market inside an organization. In this case we cannot rely on anyone who could get the information to put it into the market because the public does not participate—we either get the specific person who actually knows to participate, or the market lacks the information.
I expect this to run into all the usual problems of getting people at work to adopt a toolchain unrelated to their work. These projects normally fail; it looks like it needs to be basically zero effort to bet your information for it to work, which is heroically difficult.
Prediction markets with even very small numbers of traders are remarkably well-calibrated. Try playing around with the data in https://calibration.city/. Albeit these are traders drawn from a wider population.
I am skeptical of prediction markets as tools within organizations, even very large organizations (like Google's Gleangen or Microsoft's). They haven't been very useful, and I don't think this is just a UX or culture issue; I think the information just isn't valuable enough. Better off running polls, doing user studies, or just letting project owners execute their big-brained vision. I'm more bullish on prediction markets that are part of a product/service, or part of an advertising campaign.
I'm inclined to agree with your skepticism. Lately I attribute the low value of the information to the fact that the organization is the one that generates it in the first place. In practical terms, the performance of the project, campaign, etc. will still be driven by the internal incentives for doing the work, and it is entirely possible for bad incentives to go unchanged, leading to consistently failing projects that are correctly predicted to consistently fail. In process terms, it's a bit like what happens with AI art when it consumes too much AI art in training.
Yes, and firms already experiment with different economic mechanisms to produce this self-generated information: this is just compensation and employee benefits, including stock options, commissions, and bonuses. In this frame, it seems like a bad idea to let employees bet against, say, projects shipping on time. A negative stake is the least aligned form of compensation possible. There are hacks on top of a pure prediction market you could do to prevent people from having a negative stake. But I think once you recognize the recursive aspect of the market, you may as well just … design good compensation.
I’m also more enthusiastic about prediction markets on things mostly outside of employees’ control that are still relevant to business decisions—market trends, actions of competitors and regulators, consumer preferences maybe. Though there’s less reason for these to be internal.