A LessWrong Crypto Autopsy
Wei Dai, one of the first people Satoshi Nakamoto contacted about Bitcoin, was a frequent Less Wrong contributor. So was Hal Finney, the first person besides Satoshi to make a Bitcoin transaction.
The first mention of Bitcoin on Less Wrong, a post called Making Money With Bitcoin, came in early 2011, when a bitcoin was worth 91 cents. Gwern predicted that it could someday be worth “upwards of $10,000 a bitcoin”. He also quoted Moldbug, who advised:
If Bitcoin becomes the new global monetary system, one bitcoin purchased today (for 90 cents, last time I checked) will make you a very wealthy individual...Even if the probability of Bitcoin succeeding is epsilon, a million to one, it’s still worthwhile for anyone to buy at least a few bitcoins now...I would not put it at a million to one, though, so I recommend that you go out and buy a few bitcoins if you have the technical chops. My financial advice is to not buy more than ten, which should be F-U money if Bitcoin wins.
A few people brought up other points: if Bitcoin ever became popular, people might create a bunch of rival cryptocurrencies, and if there was too much controversy, the Bitcoin economy might have to fork. The thread got a hundred or so comments before dying down.
But Bitcoin kept getting mentioned on Less Wrong over the next few years. It’s hard to select highlights, but one of them is surely Ander’s Why You Should Consider Buying Bitcoin Right Now If You Have High Risk Tolerance from January 2015. Again, people made basically the correct points and the correct predictions, and the thread got about a hundred comments before dying down.
I mention all this because of an idea, with a long history in this movement, that “rationalists should win”. They should be able to use their training in critical thinking to recognize more opportunities, make better choices, and end up with more of whatever they want. So far it’s been controversial to what degree we’ve lived up to that hope, or to what degree it’s even realistic.
Well, suppose God had decided, out of some sympathy for our project, to make winning as easy as possible for rationalists. He might have created the biggest investment opportunity of the century, and made it visible only to libertarian programmers willing to dabble in crazy ideas. And then He might have made sure that all of the earliest adopters were Less Wrong regulars, just to make things extra obvious.
This was the easiest test case of our “make good choices” ability that we could possibly have gotten, the one where a multiply-your-money-a-thousandfold opportunity basically fell out of the sky and hit our community on its collective head. So how did we do?
I would say we did mediocre.
According to the recent SSC survey, 9% of SSC readers made $1000+ from crypto as of 12/2017. Among people who were referred to SSC from Less Wrong (my stand-in for long-time LW regulars), 15% made over $1000 on crypto, nearly twice as many. A full 3% of LWers made over $100K. That’s pretty good.
On the other hand, 97% of us (including me) didn’t make over $100K. All we would have needed to do was invest $10 (or a few CPU cycles) back when people on LW started recommending it. But we didn’t. How bad should we feel, and what should we learn?
Here are the lessons I’m taking from this.
1: Our epistemic rationality has probably gotten way ahead of our instrumental rationality
When I first saw the posts saying that cryptocurrency investments were a good idea, I agreed with them. I even Googled “how to get Bitcoin” and got a bunch of technical stuff that seemed like a lot of work. So I didn’t do it.
Back in 2016, my father asked me what this whole “cryptocurrency” thing was, and I told him he should invest in Ethereum. He did, and centupled his money. I never got around to investing myself.
On the broader scale, I saw what looked like widespread consensus across the relevant Less Wrong posts that investing in cryptocurrency was a good idea. The problem wasn’t that we failed at the epistemic task of identifying it as an opportunity. The problem was that too few of us converted that knowledge into action.
2: You can only predict the future in broad strokes, but sometimes broad strokes are enough
Gwern’s argument for why Bitcoin might be worth $10,000 doesn’t match what actually happened. He thought it would only reach that level if it became the world currency; instead it’s there for...unclear reasons.
I don’t count this as a completely failed prediction, because it seems like he was making sort of the right mental motion: calculate the size of the best-case scenario, estimate the chance of that scenario, and realize that there was no way Bitcoin wasn’t undervalued under a broad range of assumptions.
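To make that mental motion concrete, here is a minimal sketch of that kind of Fermi estimate. The specific figures (a global-money-sized best case, Moldbug’s “million to one” odds) are illustrative assumptions, not Gwern’s actual numbers:

```python
# A rough expected-value Fermi estimate of the kind the argument implies.
# All figures are illustrative assumptions, not Gwern's actual numbers.

total_coins = 21_000_000      # Bitcoin's fixed supply cap
best_case_market = 60e12      # assumed best case: Bitcoin captures a ~$60T global money market
p_best_case = 1e-6            # assumed: Moldbug's "million to one" chance of that scenario

best_case_price = best_case_market / total_coins  # ~$2.9M per coin in the best case
expected_value = p_best_case * best_case_price    # ~$2.86 per coin

price_in_2011 = 0.91
print(f"Best-case price per coin: ${best_case_price:,.0f}")
print(f"Expected value per coin:  ${expected_value:,.2f}")
print(f"Undervalued at $0.91?     {expected_value > price_in_2011}")
```

Even with the probability set at a million to one, the expected value comes out ahead of the 91-cent price, and the conclusion survives large changes to either assumption.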
3: Arguments-from-extreme-upside sometimes do work
I think Moldbug’s comment aged the best of all the ones on the original thread. He said he had no idea what was going to happen, but recommended buying ten bitcoins. If Bitcoin flopped, you were out $10. If it succeeded, you might end up with some crazy stratospheric amount (right now, ten bitcoins = $116,000). Sure, this depends on an assumption that Bitcoin had more than a 1/10,000 chance of succeeding at this level, but most people seemed to agree that was true.
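The arithmetic behind that bet is simple enough to spell out. A minimal sketch, using the figures from the paragraph above:

```python
# Breakeven probability for Moldbug's ten-bitcoin bet, using the post's own figures.
cost = 10 * 0.90    # ten bitcoins at 90 cents each
payoff = 116_000    # what ten bitcoins were worth when this post was written

breakeven_p = cost / payoff
print(f"Breakeven probability: {breakeven_p:.6f} (about 1 in {payoff / cost:,.0f})")
# ~0.000078, i.e. roughly 1 in 13,000 -- in line with the 1/10,000 threshold above
```

Any credence above that breakeven makes the bet positive in expectation, which is why the argument goes through even for someone who mostly expects Bitcoin to fail.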
This reminds me of, for example, the argument for cryonics. Most LWers believe there’s a less than 10% chance of cryonics working. But if it does work, you’re immortal. Given the extraordinary scale of the benefits, the gamble can be worth it even if the chances of success are very low.
We seem to be unusually fond of these arguments: a lot of people cite the astronomical scale of the far future as their reason for caring about superintelligent AI, despite the difficulty of affecting it at all. These arguments are weird-sounding, easy to dislike, and guaranteed to leave you worse off almost all the time.
But you only need one of them to be right before the people who take them end up better off than the people who don’t. This decade, that one was Bitcoin.
Overall, if this was a test for us, I give the community a C and me personally an F. God arranged for the perfect opportunity to fall into our lap. We vaguely converged onto the right answer in an epistemic sense. And 3% to 15% of us, not including me, actually took advantage of it and got somewhat rich. Good work to everyone who succeeded. As for those of us who failed: well, the world is getting way too weird to expect there won’t be similarly interesting challenges ahead.
This post is a clear example of how rationality has and has not worked in practice. Because the subject is of real practical importance for future decisions, it frequently comes to mind as an illustration of how and why rationality does and does not help with decisions that prove, in retrospect, to have been critical.
The post distinguishes between the LW community’s success at identifying crypto as an opportunity and its relative failure to act on it, which is a sharp reminder of how important it is to actually act on information instead of just processing it mentally.
This failure mode of understanding a problem but failing to act on that understanding is a very common one, for me and, I would expect, for other readers. Emphasizing that acting is part of the problem to be solved, and illustrating the specific benefits of solving it in a historical context where the outcomes carry actual monetary value, is a great way to convey the concrete value of rationality.
The discussion also quickly converges on a relatively cheap solution: writing tutorial-style documentation for high-value processes like this one. That kind of intro tutorial is one of the most valuable things to read for exactly this reason: it can close the understanding-to-action gap. I would love to read more articles inspired by the notion that there is value lying in plain sight, ready to be grasped.
It is important to understand why we fail.