Saul Munn
Announcing Manifest 2023 (Sep 22-24 in Berkeley)
Last Chance: Get tickets to Manifest 2023! (Sep 22-24 in Berkeley)
Manifest 2023
OPTIC: Announcing Intercollegiate Forecasting Tournaments in SF, DC, Boston
Solving Two-Sided Adverse Selection with Prediction Market Matchmaking
Thanks for the response!
Re: concerns about bad incentives, I agree that you can depict the losses associated with manipulating conditional prediction markets as paying a “cost” — even though a manipulator will probably lose a boatload of money, it might be worth it to them to do so. In the words of Scott Alexander, though:
If you’re wondering why people aren’t going to get an advantage in the economy by committing horrible crimes, the answer is probably the same combination of laws, ethics, and reputational concerns that works everywhere else.
I’m concerned about this, but it feels like a solvable problem.
Re: personal stuff & the negative externalities of publicly revealing probabilities, thanks for pointing these out. I hadn’t thought of them. Added them to the post!
This is great! Have you cross-posted this to the EA Forum? If not, may I?
I think a lot of what I write for rationalist meetups would apply straightforwardly to EA meetups.
agreed. this sort of thing feels completely missing from the EA Groups Resources Centre, and i’d guess it would be a big/important contribution.
This may be a silly question, but: how does cross-posting usually work?
iirc, when you’re publishing a post on {LessWrong, the EA forum}, one of the many settings at the bottom is “Cross-Post to {the EA forum, LessWrong},” or something along those lines. there’s some karma requirement for both the EA forum and for LW — if you don’t meet the karma requirement for one, you might need to manually cross-post until you have enough karma.
are there norms on the EA forum around, say, pseudonyms and real names, or being a certain amount aligned with EA?
re: pseudonyms: though there’s a general, mild preference for post authors to use their real names, using a pseudonym is perfectly fine — and many do (example).
re: alignment: you don’t need to be fully on-board with EA to post on the forum (and many aren’t), but the content of your post should at least relate to EA.
for other questions, here’s a guide to the norms on the EA forum (it has sections on “rules for pseudonymous and multiple accounts” and “privacy and pseudonymity”).
*i’ll edit & delete this part later, but: i’ll get back to you over email in a bit! caught up with other stuff, and getting to things one at a time :)
If you and your audience have smartphones, we suggest making use of a copy of this spreadsheet and google form.
are “spreadsheet” and “google form” meant to be linked to something?
Link Collection: Impact Markets
Explaining Impact Markets
Things You’re Allowed to Do: University Edition
the name of this post was really confusing for me. i thought it would be about “how to stop defeating akrasia,” not “how to defeat akrasia by stopping.” consider renaming it to be a bit more clear?
the part at the end really reminded me of this piece by dr maciver: https://notebook.drmaciver.com/posts/2022-12-20-17:21.html
relevant: story-based decision-making
Invest in ACX Grants projects!
I really enjoyed this — thank you for writing. I also think the updated version is a lot better than the previous version, and I appreciate the work you put in to update it. I’m really, really looking forward to the other posts in this sequence.
I’d also really enjoy a post that’s on this exact topic, but one that I’d feel comfortable sending to my mom or something, cf “Broad adverse selection (for poets).”
Thanks for the response!
Could you point to some specific areas of magical thinking in the post and/or in the space?[1] (I’m not claiming that there aren’t any; I definitely think there are. I’m interested to know where I & the space are being overconfident or thinking magically, so that we can do less of it.)
The mechanism that Manifold Love uses. In section 2, I put it as “run a bunch of conditional prediction markets on a bunch of key benchmarks for potential pairs between two sides that are normally caught in adverse selection.” I wrote this post to explain the actual mechanism by which I think (conditional) prediction markets might solve these problems, but I also want to note that I definitely do not think that (conditional) prediction markets will definitely for sure 100% totally completely solve these problems. I just think they have potential, and I’m excited to see people giving it a shot.
I agree! In order to get a good prediction from a market, you (probably, see the footnote) need participation to be positive-sum.[3] I think there are a few ways to get this:
Direct subsidies
Since prediction markets create valuable price information, it might make sense to have those who benefit from that information pay for it directly. I can imagine this pretty clearly, actually: Manifold Love could charge users for (e.g.) more than 5 matches, with some part of the fee going toward market subsidies. As you pointed out, most of my examples — matchmakers, headhunters, real estate agents, etc — already involve paying a third party, so it seems like this sort of thing aligns with existing norms & users’ expectations.
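To make the arithmetic of a direct subsidy concrete, here’s a minimal sketch. All names and numbers are hypothetical (this is not Manifold’s actual fee structure or settlement rule); it just shows how routing part of a user fee into a market makes total payouts exceed total stakes, i.e., makes participation positive-sum in aggregate.

```python
# Hypothetical sketch: a user pays for extra matches, and a share of that fee
# subsidizes the market. Names and numbers are illustrative, not Manifold's.

MATCH_FEE = 10.00      # what a user pays for matches beyond the free tier (assumed)
SUBSIDY_SHARE = 0.30   # fraction of the fee routed into market subsidies (assumed)

def settle_market(stakes: dict[str, float], winners: set[str],
                  subsidy: float) -> dict[str, float]:
    """Pari-mutuel-style settlement: winners split the losing pool plus the
    subsidy, pro rata to their stakes. With subsidy > 0, traders in aggregate
    receive more than they staked, so participation is positive-sum."""
    winning_pool = sum(s for t, s in stakes.items() if t in winners)
    losing_pool = sum(s for t, s in stakes.items() if t not in winners)
    payouts = {}
    for trader, stake in stakes.items():
        if trader in winners:
            payouts[trader] = stake + (losing_pool + subsidy) * (stake / winning_pool)
        else:
            payouts[trader] = 0.0
    return payouts

subsidy = MATCH_FEE * SUBSIDY_SHARE  # 3.00 from one paying user
payouts = settle_market({"alice": 6.0, "bob": 4.0}, {"alice"}, subsidy)
# alice staked 6 and receives 6 + (4 + 3) = 13, so total payouts (13)
# exceed total stakes (10): the subsidy is what traders collectively win.
```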
Hedging
Some participants bet to hedge other, off-market risks. These participants are likely to lose money on prediction markets, and they know that ahead of time. That’s because they’re not betting their beliefs; they’re paying the equivalent of an insurance premium.
For prediction markets generally, this seems like the most viable path to getting money flowing into the market. I’m not sure how well it’d work for this sort of setup, though — mainly because the markets are so personal.
This requires finding markets on which participants would want to hedge, which seems like a difficult problem. I give an example below, but I’m pretty unsure what something like this would look like in a lot of the examples I listed in the original essay.
Continuing the example of the labor market from section 2: I could imagine (e.g.) a Google employee buying an ETF-type-thing that bets NO on whether all potential Google employees will remain at Google a year from their hiring date. This protects that Google employee against the risk of some huge round of layoffs — they’ve bought “insurance” against that outcome. In doing so, the hedger makes the market positive-sum for the participants who are betting their beliefs.
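The insurance framing above can be sketched with some simple arithmetic. Everything here is hypothetical (share prices, incomes, position size are made up for illustration): the employee buys NO shares that pay out exactly in the world where their income takes a hit, so the hedge narrows the gap between outcomes, and the premium they knowingly overpay is what subsidizes the belief-betting traders.

```python
# Illustrative sketch of hedging with NO shares on a market like "everyone
# hired this year is still at the company in a year." All numbers hypothetical.

def hedged_outcomes(income_if_kept: float, income_if_laid_off: float,
                    no_shares: float, no_price: float) -> dict[str, float]:
    """Each NO share costs `no_price` and pays 1.0 if the market resolves NO
    (i.e., a layoff happens). Returns the total (income + market) outcome in
    each world."""
    cost = no_shares * no_price
    return {
        "no_layoff": income_if_kept - cost,               # premium lost, like insurance
        "layoff": income_if_laid_off + no_shares - cost,  # payout cushions the income hit
    }

outcomes = hedged_outcomes(income_if_kept=100.0, income_if_laid_off=60.0,
                           no_shares=30.0, no_price=0.2)
# no_layoff: 100 - 6 = 94; layoff: 60 + 30 - 6 = 84. The hedge narrows the
# gap between worlds from 40 (unhedged) to 10, at the cost of the premium.
```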
New traders
This provides an inflow of money, but is (obviously) tied to new traders joining the market. I don’t like this at all, because it’s totally unsustainable and leads to community dynamics like those in crypto. Also, it’s a scheme that’s pyramid-shaped, a “pyramid scheme” if you will.
I’m mainly including this for completeness; I think relying on this is a terrible idea.
Agreed! It’s unclear to me too. This sort of question is answerable by trying the thing and seeing what happens — which is why I’m excited for people & companies to try it out and see if it works.
I’m assuming you mean the “prediction market/forecasting space,” so please let me know if that’s not the space to which you’re referring.
I’ll interpret “subsidize” more broadly as “money flowing into the market to make it positive-sum after fees, inflation, etc.”
I’m comfortable working under this assumption for now, but I do want to be clear that I’m not fully convinced this is the case. The stock market is clearly negative-sum for the majority of traders, and yet… traders still join. It seems at least plausible that, as long as the market is positive-sum for some key participants, it can still provide valuable price information.