What Money Cannot Buy
> The problem is, if you’re not a hacker, you can’t tell who the good hackers are. A similar problem explains why American cars are so ugly. I call it the design paradox. You might think that you could make your products beautiful just by hiring a great designer to design them. But if you yourself don’t have good taste, how are you going to recognize a good designer? By definition you can’t tell from his portfolio. And you can’t go by the awards he’s won or the jobs he’s had, because in design, as in most fields, those tend to be driven by fashion and schmoozing, with actual ability a distant third. There’s no way around it: you can’t manage a process intended to produce beautiful things without knowing what beautiful is. American cars are ugly because American car companies are run by people with bad taste.
>
> — Paul Graham, “Great Hackers”
I don’t know how much I believe this claim about cars, but I certainly believe it about software. A startup without a technical cofounder will usually produce bad software, because someone without software engineering skills does not know how to recognize such skills in someone else. The world is full of bad-to-mediocre “software engineers” who do not produce good software. If you don’t already know a fair bit about software engineering, you will not be able to distinguish them from the people who really know what they’re doing.
Same with user interface design. I’ve worked with a CEO who was good at UI; both the process and the results were visibly superior to others I’ve worked with. But if you didn’t already know what good UI design looks like, you would have no idea—good design is largely invisible.
Yudkowsky makes the case that the same applies to security: you can’t build a secure product with novel requirements without having a security expert as a founder. The world is full of “security experts” who do not, in fact, produce secure systems—I’ve met such people. (I believe they mostly make money by helping companies visibly pretend to have made a real effort at security, which is useful in the event of a lawsuit.) If you don’t already know a fair bit about security, you will not be able to distinguish such people from the people who really know what they’re doing.
But to really drive home the point, we need to go back to 1774.
As the American Revolution was heating up, a wave of smallpox was raging on the other side of the Atlantic. An English dairy farmer named Benjamin Jesty was concerned for his wife and children. He was not concerned for himself, though—he had previously contracted cowpox. Cowpox was contracted by milking infected cows, and was well known among dairy farmers to convey immunity against smallpox.
Unfortunately, neither Jesty’s wife nor his two children had any such advantage. When smallpox began to pop up in Dorset, Jesty decided to take drastic action. He took his family to a nearby farm with a cowpox-infected cow, scratched their arms, and wiped pus from the infected cow on the scratches. Over the next few days, their arms grew somewhat inflamed and they suffered the mild symptoms of cowpox—but it quickly passed. As the wave of smallpox passed through the town, none of the three were infected. Throughout the rest of their lives, through multiple waves of smallpox, they were immune.
The same technique would be popularized twenty years later by Edward Jenner, marking the first vaccine and the beginning of modern medicine.
The same wave of smallpox which ran across England in 1774 also made its way across Europe. In May, it reached Louis XV, King of France. Despite the wealth of a major government and the talents of Europe’s most respected doctors, Louis XV died of smallpox on May 10, 1774.
The point: there is knowledge for which money cannot substitute. Even if Louis XV had offered a large monetary bounty for ways to immunize himself against the pox, he would have had no way to distinguish Benjamin Jesty from the endless crowd of snake-oil sellers and faith healers and humoral balancers. Indeed, top medical “experts” of the time would likely have warned him away from Jesty.
The general pattern:
- Take a field in which it’s hard for non-experts to judge performance
- Add lots of people who claim to be experts (and may even believe that themselves)
- Result: someone who is not already an expert will not be able to buy good performance, even if they throw lots of money at the problem
Now, presumably we can get around this problem by investing the time and effort to become an expert, right? Nope! Where there are snake-oil salesmen, there will also be people offering to teach their secret snake-oil recipe, so that you too can become a master snake-oil maker.
So… what can we do?
The cheapest first step is to do some basic reading on a few different viewpoints and think things through for yourself. Simply reading the “correct horse battery staple” xkcd will be sufficient to recognize a surprising number of really bad “security experts”. It probably won’t get you to a level where you can distinguish the best from the middling—I don’t think I can currently distinguish the best from the middling security experts. But it’s a start.
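For readers who haven’t seen that xkcd, its core point is simple arithmetic: a passphrase of several random common words carries more entropy than a short “complex” password, because entropy is just the log of the number of equally likely possibilities. A minimal sketch of the calculation (the 2048-word list and four-word passphrase are the comic’s scenario, not anything specific to this post):

```python
import math

def passphrase_bits(wordlist_size: int, num_words: int) -> float:
    """Entropy in bits of a passphrase built from num_words words,
    each drawn uniformly at random from a wordlist of wordlist_size
    words. Total possibilities = wordlist_size ** num_words, so
    entropy = num_words * log2(wordlist_size)."""
    return num_words * math.log2(wordlist_size)

# Four random words from a 2048-word list: 4 * 11 = 44 bits,
# versus the comic's estimate of ~28 bits for a typical
# "one word plus substitutions and punctuation" password.
print(passphrase_bits(2048, 4))
```

Anyone selling password policies who gets this backwards is exactly the kind of “security expert” that a little basic reading lets you filter out.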
More generally: it’s often easier to tell which of multiple supposed experts is correct, than to figure everything out from first principles yourself. Besides looking at the object-level product, this often involves looking at incentives in the broader system—see e.g. Inadequate Equilibria. Two specific incentive-based heuristics:
- Skin in the game is a good sign—Jesty wanted to save his own family, for instance.
- Decoupling from external monetary incentives is useful—in other words, look for hobbyists. People at a classic car meetup or a track day will probably have better taste in car design than a J.D. Power award.
That said, remember the main message: there is no full substitute for being an expert yourself. Heuristics about incentives can help, but they’re leaky filters at best.
Which brings us to the ultimate solution: try it yourself. Spend time in the field, practicing the relevant skills first-hand; see both what works and what makes sense. Collect data; run trials. See what other people suggest and test those things yourself. Directly study which things actually produce good results.