Rationalists try to be well calibrated and have good world models, so we should be great at prediction markets, right?
Alas, it looks bad at first glance:
I’ve got a hopeful guess at why people referred from core rationalist sources seem to be losing so many bets, based on my own scores. My Manifold score looks pretty bad (-M192 overall profit), but there’s a fun reason for it. Every one of my resolved bets is positive or neutral, while all but one of my unresolved bets are negative or neutral.
Here’s my full prediction record:
The vast majority of my losses are on things that don’t resolve soon and are widely thought to be unlikely (plus a few tiny, not particularly well-thought-out bets like dropping M15 on LK-99), and I’m for sure losing points there, but my actual track record cashed out in resolutions tells a very different story.
I wonder if there are some clever stats that @James Grugett @Austin Chen or others on the team could do to disentangle these effects, and see what the quality-adjusted bets on critical questions like the AI doom ones would be absent this kind of effect. I’d be excited to see an extra column on the referrers table showing cashed-out predictions only, rather than raw profit. Or generally emphasising cashed-out predictions in the UI more heavily, to mitigate the Keynesian-beauty-contest-style effects of trying to predict distant events.
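To make the “cashed-out only” idea concrete, here’s a rough sketch of the split I have in mind (hypothetical data structures, not the real Manifold schema or API): compute profit separately over resolved and unresolved bets, so someone whose resolved record is fine isn’t dragged down by mark-to-market losses on long-dated long shots.

```python
# Rough sketch (made-up Bet structure, not Manifold's actual schema):
# realized profit on resolved markets vs. mark-to-market "profit" on open ones.

from dataclasses import dataclass

@dataclass
class Bet:
    amount: float         # mana spent on the position
    current_value: float  # payout if resolved, else current market value of the position
    resolved: bool

def split_profit(bets: list[Bet]) -> tuple[float, float]:
    """Return (realized_profit, unrealized_profit)."""
    realized = sum(b.current_value - b.amount for b in bets if b.resolved)
    unrealized = sum(b.current_value - b.amount for b in bets if not b.resolved)
    return realized, unrealized

# Illustrative numbers only: resolved bets net positive, open long-shot bets
# (doom markets, LK-99-style punts) currently marked well below what was paid.
example = [
    Bet(amount=50, current_value=80, resolved=True),    # won
    Bet(amount=20, current_value=20, resolved=True),    # neutral
    Bet(amount=100, current_value=40, resolved=False),  # open, marked down
    Bet(amount=15, current_value=3, resolved=False),    # tiny long shot
]
print(split_profit(example))  # realized +30, unrealized -72: raw profit looks bad, resolved record is fine
```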
These datapoints just feel like the result of random fluctuations. Both Writer and Eliezer mostly drove people to participate in the LK-99 stuff, where lots of people were confidently wrong. In general you can see that basically all the top referrers have negative income:
Among the top 10, Eliezer and Writer are somewhat better than the average (and yaboi is a huge outlier, which I’d guess is explained by them doing something quite different from the other people).
Agreed: expanding to the top 9[1] makes it clear they’re not unusual in having large negative referral totals. I’d still expect Ratia to be doing better than this, and would guess a bunch of that comes from betting against common positions on doom markets, simulation markets, and other things which won’t resolve anytime soon (and betting at times when the prices are not too good, because of correlations in when that group is paying attention).
Though the rest of the leaderboard seems to be doing much better.
The interest rate on Manifold makes such long-dated investments not worth it anyway, even if everyone else’s positions looked reasonable to you.
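A toy calculation of the opportunity-cost point (the numbers are purely illustrative, not from the thread): even a winning NO position on a decade-out doom market compounds far more slowly than recycling the same mana through markets that resolve quickly.

```python
# Toy numbers: why locking mana in long-dated markets is a bad deal even when
# you're right, if liquid mana can earn more on shorter-horizon markets.

def annualized_return(buy_price: float, payout: float, years: float) -> float:
    """Annualized growth rate of a position bought at buy_price that pays out payout."""
    return (payout / buy_price) ** (1 / years) - 1

# Buy NO at 75 (market says 25% yes) on a question resolving in ~10 years.
print(annualized_return(75, 100, 10))  # ~0.029, i.e. ~2.9%/year even if you win
# Versus recycling mana through shorter markets with, say, a 10%/year edge:
print(1.10 ** 10)                      # ~2.59x over the same decade
```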