When Money Is Abundant, Knowledge Is The Real Wealth
First Puzzle Piece
By and large, the President of the United States can order people to do things, and they will do those things. POTUS is often considered the most powerful person in the world. And yet, the president cannot order a virus to stop replicating. The president cannot order GDP to increase. The president cannot order world peace.
Are there orders the president could give which would result in world peace, or increasing GDP, or the end of a virus? Probably, yes. Any of these could likely even be done with relatively little opportunity cost. Yet no president in history has known which orders will efficiently achieve these objectives. There are probably some people in the world who know which orders would efficiently increase GDP, but the president cannot distinguish them from the millions of people who claim to know (and may even believe it themselves) but are wrong.
Last I heard, Jeff Bezos was the official richest man in the world. He can buy basically anything money can buy. But he can’t buy a cure for cancer. Is there some way he could spend a billion dollars to cure cancer in five years? Probably, yes. But Jeff Bezos does not know how to do that. Even if someone somewhere in the world does know how to turn a billion dollars into a cancer cure in five years, Jeff Bezos cannot distinguish that person from the thousands of other people who claim to know (and may even believe it themselves) but are wrong.
When non-experts cannot distinguish true expertise from noise, money cannot buy expertise. Knowledge cannot be outsourced; we must understand things ourselves.
Second Puzzle Piece
The Haber process combines one molecule of nitrogen with three molecules of hydrogen to produce two molecules of ammonia—useful for fertilizer, explosives, etc. If I feed a few grams of hydrogen and several tons of nitrogen into the Haber process, I’ll get out a few grams of ammonia. No matter how much more nitrogen I pile in—a thousand tons, a million tons, whatever—I will not get more than a few grams of ammonia. If the reaction is limited by the amount of hydrogen, then throwing more nitrogen at it will not make much difference.
In the language of constraints and slackness: ammonia production is constrained by hydrogen, and by nitrogen. When nitrogen is abundant, the nitrogen constraint is slack; adding more nitrogen won’t make much difference. Conversely, since hydrogen is scarce, the hydrogen constraint is taut; adding more hydrogen will make a difference. Hydrogen is the bottleneck.
Likewise in economic production: if a medieval book-maker requires 12 sheep skins and 30 days’ work from a transcriptionist to produce a book, and the book-maker has thousands of transcriptionist-hours available but only 12 sheep, then he can only make one book. Throwing more transcriptionists at the book-maker will not increase the number of books produced; sheep are the bottleneck.
When some inputs become more or less abundant, bottlenecks change. If our book-maker suddenly acquires tens of thousands of sheep skins, then transcriptionists may become the bottleneck to book-production. In general, when one resource becomes abundant, other resources become bottlenecks.
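Both the Haber example and the book-maker example follow the same "limiting reagent" logic: output is the minimum over inputs of (amount available) / (amount required per unit of output), the same shape as a Leontief production function. A toy sketch in Python, with illustrative numbers:

```python
def max_output(available, required_per_unit):
    """Units of output are limited by the scarcest input,
    measured relative to how much of it each unit of output requires."""
    return min(available[k] // required_per_unit[k] for k in required_per_unit)

# Haber process, counted in molecules: N2 + 3 H2 -> 2 NH3.
# A mountain of nitrogen cannot compensate for three molecules of hydrogen.
reactions = max_output({"N2": 1_000_000_000, "H2": 3}, {"N2": 1, "H2": 3})
print(2 * reactions)  # 2 molecules of ammonia, no matter how much N2

# Book-maker: each book needs 12 skins and 30 transcriptionist-days.
books = max_output({"skins": 12, "days": 5_000}, {"skins": 12, "days": 30})
print(books)  # 1 book; skins are the taut constraint
```

The taut constraint is whichever input achieves that minimum; piling up any other input leaves output unchanged.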
Putting The Pieces Together
If I don’t know how to efficiently turn power into a GDP increase, or money into a cure for cancer, then throwing more power/money at the problem will not make much difference.
King Louis XV of France was one of the richest and most powerful people in the world. He died of smallpox in 1774, the same year that a dairy farmer successfully immunized his wife and children with cowpox. All that money and power could not buy the knowledge of a dairy farmer—the knowledge that cowpox could safely immunize against smallpox. There were thousands of humoral experts, faith healers, eastern spiritualists, and so forth who would claim to offer some protection against smallpox, and King Louis XV could not distinguish the real solution.

As one resource becomes abundant, other resources become bottlenecks. When wealth and power become abundant, anything wealth and power cannot buy becomes a bottleneck—including knowledge and expertise.
After a certain point, wealth and power cease to be the taut constraints on one’s action space. They just don’t matter that much. Sure, giant yachts are great for social status, and our lizard-brains love politics. The modern economy is happy to provide outlets for disposing of large amounts of wealth and power. But personally, I don’t care that much about giant yachts. I want a cure for aging. I want weekend trips to the moon. I want flying cars and an indestructible body and tiny genetically-engineered dragons. Money and power can’t efficiently buy that; the bottleneck is knowledge.
Based on my own experience and the experience of others I know, I think knowledge starts to become taut rather quickly—I’d say at an annual income level in the low hundred thousands. With that much income, if I knew exactly the experiments or studies to perform to discover a cure for cancer, I could probably make them happen. (Getting regulatory approval is another matter, but I think that would largely handle itself if people knew the solution—there’s a large profit incentive, after all.) Beyond that level, more money mostly just means more ability to spray and pray for solutions—which is not a promising strategy in our high-dimensional world.
So, two years ago I quit my monetarily-lucrative job as a data scientist and have mostly focused on acquiring knowledge since then. I can worry about money if and when I know what to do with it.
A mindset I recommend trying on from time to time, especially for people with $100k+ income: think of money as an abundant resource. Everything money can buy is “cheap”, because money is “cheap”. Then the things which are “expensive” are the things which money alone cannot buy—including knowledge and understanding of the world. Life lesson from Disney!Rumplestiltskin: there are things which money cannot buy, therefore it is important to acquire such things and use them for barter and investment. In particular, it’s worth looking for opportunities to acquire knowledge and expertise which can be leveraged for more knowledge and expertise.
Investments In Knowledge
Past a certain point, money and power are no longer the limiting factors for me to get what I want. Knowledge becomes the bottleneck instead. At that point, money and power are no longer particularly relevant measures of my capabilities. Pursuing more “wealth” in the usual sense of the word is no longer a very useful instrumental goal. At that point, the type of “wealth” I really need to pursue is knowledge.
If I want to build long-term knowledge-wealth, then the analogy between money-wealth and knowledge-wealth suggests an interesting question: what does a knowledge “investment” look like? What is a capital asset of knowledge, an investment which pays dividends in more knowledge?
Mapping out the internal workings of a system takes a lot of up-front work. It’s much easier to try random molecules and see if they cure cancer, than to map out all the internal signals and cells and interactions which cause cancer. But the latter is a capital investment: once we’ve nailed down one gear in the model, one signal or one mutation or one cell-state, that informs all of our future tests and model-building. If we find that Y mediates the effect of X on Z, then our future studies of the Y-Z interaction can safely ignore X. On the other hand, if we test a random molecule and find that it doesn’t cure cancer, then that tells us little-to-nothing; that knowledge does not yield dividends.
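The "Y mediates the effect of X on Z" claim is the screening-off property of a causal chain X → Y → Z. A toy illustration (my own sketch, not from the post, using hypothetical binary variables): once Y is known, X carries no further information about Z, so future study of the Y–Z link can ignore X.

```python
import itertools
from collections import Counter

# Toy causal chain X -> Y -> Z over binary variables, with independent
# noise on each link. The point: once Y is known, X tells us nothing
# more about Z ("Y screens off X").
def y_of(x, noise):
    return x ^ noise  # Y depends on X plus its own noise

def z_of(y, noise):
    return y ^ noise  # Z depends only on Y, never directly on X

def dist_z_given(x, y):
    """Conditional distribution of Z given X and Y, uniform binary noise."""
    outcomes = Counter()
    for n, m in itertools.product([0, 1], repeat=2):
        if y_of(x, n) == y:
            outcomes[z_of(y, m)] += 1
    total = sum(outcomes.values())
    return {z: count / total for z, count in outcomes.items()}

# For a fixed Y, the distribution of Z is identical for every X.
assert dist_z_given(0, 1) == dist_z_given(1, 1)
assert dist_z_given(0, 0) == dist_z_given(1, 0)
```

Each such screening-off fact permanently shrinks the space of experiments worth running, which is exactly the sense in which a gears-level model pays dividends.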
Of course, gears-level models aren’t the only form of capital investment in knowledge. Most tools of applied math and the sciences consist of general models which we can learn once and then apply in many different contexts. They are general-purpose gears which we can recognize in many systems.
Once I understand the internal details of how e.g. capacitors work, I can apply that knowledge to understand not only electronic circuits, but also charged biological membranes. When I understand the math of microeconomics, I can apply it to optimization problems in AI. When I understand shocks and rarefactions in nonlinear PDEs, I can see them in action at the beach or in traffic. And the “core” topics—calculus, linear algebra, differential equations, big-O analysis, Bayesian probability, optimization, dynamical systems, etc—can be applied all over. General-purpose models are a capital investment in knowledge.
I hope that someday my own research will be on that list. That’s the kind of wealth I’m investing in now.
This is one of those posts, like “pain is not the unit of effort,” that pairs a memorable, informative, and important slogan with a bunch of argumentation and examples to back it up. I think this type of post is great for the LW review.
When I first read this post, I thought it was boring and unimportant: trivially, there will be some circumstances where knowledge is the bottleneck, because for pretty much all X there will be some circumstances where X is the bottleneck.
However, since then I’ve ended up saying the slogan “when money is abundant, knowledge is the real wealth” probably about a dozen separate times when explaining my career decisions, arguing with others at CLR about what our strategy should be, and even when deliberating to myself about what to do next. I guess longtermist EAs right now do have a surplus of money and a shortage of knowledge (relative to how much knowledge is needed to solve the problems we are trying to solve...) so in retrospect it’s not surprising that this slogan was practically applicable to my life so often.
I do think there are ways the post could be expanded and improved. Come to think of it, I’ll make a mini-comment right here to gesture at the stuff I would add to it if I could:
1. List of other ideas for how to invest in knowledge. For example, building a community with good epistemic norms. Or paying a bunch of people to collect data / info about various world developments and report on them to you. Or paying a bunch of people to write textbooks and summaries and explainer videos and make diagrams illustrating cutting-edge knowledge (yours and others’).
2. Arguments that in fact, right now, longtermist EAs and/or AI-risk-reducers are bottlenecked on knowledge (rather than money, or power/status)
--My own experience doing cost-benefit analyses is that interventions/plans vary in EV by OOMs and that it’s common to find new considerations or updated models that flip the sign entirely, or add or subtract a few OOMs, for a given intervention/plan. This sure seems like a situation in which more knowledge is really helpful compared to just having more money & ability to execute on plans.
--Everyone I’ve talked to in government/policy says that the bottleneck is knowledge. Nobody knows what to advocate for right now because everything is so uncertain and backfirey. (See previous point, lol)
--One might counterargue that it is precisely for the above reasons that we shouldn’t invest in knowledge; we aren’t getting much knowledge out of our research and we still won’t in the future. Instead, our best hope is to accumulate lots of power and resources and then when the crucial crunch time period comes, hopefully it’ll be clear what to do. Because the world might hand us knowledge on a silver platter, so to speak, in the form of new evidence. No need to deduce it in advance.
--(I have a couple responses to the above counterargument, but I take it seriously and think that others should too)
--Reasoning by analogy, making AI go well sure seems like the sort of problem where knowledge is the bottleneck rather than money or power. It seems a lot more like figuring out the laws of physics and building a safe rocket before anyone gets killed in an unsafe rocket, or building a secure merchant drone operating system before anyone else builds an insecure one, than e.g. preventing malaria or reforming US animal welfare laws.
This post’s claim seems to have a strong and weak version, both of which are asserted at different places in the post.
Strong claim: At some level of wealth and power, knowledge is the most common or only bottleneck for achieving one’s goals.
Weak claim: Things money and power cannot obtain can become the bottleneck for achieving one’s goals.
The claim implied by the title is the strong form. Here is a quote representing the weak form:
“As one resource becomes abundant, other resources become bottlenecks. When wealth and power become abundant, anything wealth and power cannot buy becomes a bottleneck—including knowledge and expertise.”
Of course, knowing arbitrary facts (N values of an infinite sequence of randomly generated numbers) is not what’s meant by “knowledge and expertise.” What is?
I’d suggest “necessary and sufficient knowledge to achieve a given goal.” A person who can achieve a goal, given some reasonable but not excessive amount of time and money, is an expert at achieving that goal.
As others pointed out, just because a person calls themselves an expert in goal G doesn’t mean that they are. John’s point is that being able to identify an expert in goal G, or an expert in identifying experts in goal G, when you yourself wish to achieve goal G, is its own form of expertise.
This in turn suggests that finding, sharing and verifying expertise is a key problem-solving skill. At any given time, we have:
Questions nobody can answer.
Answers nobody can understand.
Answers nobody can verify.
To these short statements, insert the qualifiers “crucial,” “presently,” “efficiently,” and so on. Some of the most important questions are about other questions, such as “what questions should we be asking?”
I expect these problems to be simultaneous and mutually-reinforcing.
I really appreciate that this post calls out its intended audience so specifically. That may be limiting, but it likely limits the post to an audience with a strong overlap with LW readership.
I feel like there’s a catch-22 here, in that there are many problems that probably could be solved with money, but I don’t know how to solve them with money—at least not efficiently. As a very mundane example, I know I could reduce my chance of ankle injury during sports by spending more money on shoes. But I don’t know which shoes will actually be cost-efficient for this, and the last time I bought shoes I stopped using two different pairs after just a couple months.
Unfortunately I think that’s too broad of a topic to cover and I’m digressing.
Overall, coming back to this, I’m realizing that I don’t actually have any way to act on this piece. Even though I am in the intended audience, and I have been making a specific effort in my life to treat money as cheap and plentiful, I am not seeing:
Advice on which subjects are likely to pay dividends, or why
Advice on how to recover larger amounts of time or effort by spending money more efficiently
Discussion of when those tradeoffs would be useful
This omission seems especially glaring given, for example, Zvi’s Covid posts, which are a pretty clear modern-day example of the Louis XV smallpox problem.
I would be interested in seeing someone work through how it is that people on LW ended up trusting Zvi’s posts and how that knowledge was built. But I would expect that to turn into social group dynamics and analysis of scientific reasoning, and I’m not sure that I see where the idea of money’s abundance would even come into it.
Sounds like you want roughly the sequence Inadequate Equilibria.