Suppose you were to reach into the mirror universe, where everything is inverted, and pull out a book that is the exact opposite of Robert Gordon’s The Rise and Fall of American Growth.
Instead of being written by a scholarly economic historian steeped in the study of the past, it would be written by an engineer who has spent his career on futuristic technology. Instead of coming from a prestigious academic press, it would be self-published, with misformatted tables and a cover featuring garish three-dimensional lettering. Instead of sticking to extremely conservative predictions about future technologies, it would speculate audaciously about the limits of the possible, from nanotech to cold fusion. Instead of a sober survey of economic history in one country and time period, it would range widely through engineering, physics and philosophy, exploring the power-to-weight ratio of jet turbines in one chapter, and describing the rise of the counterculture in the next. And instead of proclaiming the death of innovation and the end of growth, it would paint a bold vision of an ambitious technological future.
That book has leapt out of the mirror universe and into an Amazon Kindle edition (priced at 𝜋 dollars): Where Is My Flying Car? A Memoir of Future Past, by J. Storrs Hall.
Hall sets out to tackle the title question: why don’t we have flying cars yet? And indeed, several chapters in the book are devoted to deep dives on the history, engineering, and economics of flying cars. But to fully answer the question, Hall must go much broader and deeper, because he quickly concludes that the barriers to flying cars are not technological or economic—they are cultural and political. To explain the flying car gap is to explain the Great Stagnation itself.
Bold futures
The most valuable thing I took away from the book was an awareness of some powerful technological possibilities.
Flying cars
Before reading the book, I had assumed that the flying car was one of those ideas that sounds good on its face, but turns out not to work or be interesting in practice. Maybe flying cars are inherently too hard to fly, too dangerous, or just not all that valuable. This book changed my mind, by pointing out a simple analogy: today’s system of air travel has all the inconveniences that railroads had over a century ago. Airplanes are large, mass-transit vehicles that travel only defined, scheduled routes between a small number of stations. This creates two problems for travelers. First is the “three vehicles” problem: you have to get from your origin to the nearest station, and then from your arrival station to your actual destination, changing vehicles each time (and hauling your luggage). Second is the inconvenience of schedule: having to be on time to catch the train or plane, compared to the personal vehicle that is ready immediately whenever you want it. This is driven home when you remember that a 90-minute flight from, say, San Francisco to Los Angeles actually takes you half a day, after travel to and from the airport plus delays at ticketing, security and boarding.
The book points out that the major value in a flying car (as with supersonic flight) would not be in taking the same trips you do now, only a bit faster. Instead, it would be in taking the trips you don’t take now, because they’re too inconvenient. A flying car would shrink your world, expanding the radius of what you would consider for a commute, a shopping trip, a visit to friends, a business meeting, or a weekend vacation. Indeed, Hall cites literature from travel studies finding that people in all societies travel on average about an hour a day, whether walking barefoot or driving on the highway. And he points out that increasing the effective radius for each of those trips increases the effective area open to you quadratically (doubling your travel radius means four times as many destinations).
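To make the scaling concrete, here is a minimal back-of-the-envelope sketch; the speeds are my own illustrative assumptions, not figures from the book:

    import math

    # Destinations reachable scale with the area inside your travel radius.
    # Speeds are illustrative assumptions, not numbers from the book.
    daily_travel_hours = 1.0  # the roughly constant travel-time budget Hall cites
    for label, speed_mph in [("city driving", 30), ("highway", 65), ("hypothetical flying car", 180)]:
        radius_mi = speed_mph * daily_travel_hours / 2   # out and back within the hour
        area_sq_mi = math.pi * radius_mi ** 2
        print(f"{label:24s} radius ~{radius_mi:5.1f} mi, reachable area ~{area_sq_mi:8.0f} sq mi")

The exact numbers don’t matter; the point is the quadratic scaling, which is why faster point-to-point travel changes which trips you take, not just how long the old trips take.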
Hall did extensive research and analysis for the book, even learning to fly a private aircraft himself. He recounts the history of flying car research and development, which began much earlier and has had many more credible attempts than I realized. He catalogs design approaches, including convertibles (vehicles that convert between flying and driving) and VTOL (vertical take-off and landing). He models engineering tradeoffs and travel times. And he concludes that there is no technological or economic reason why we can’t have flying cars with existing technology—indeed, why we couldn’t have had them already, if sustained work on them had continued past the 1970s.
Nanotechnology
Hall’s degrees are in computer science, but much of his career has been in nanotech, which was surprisingly prominent in the book. He makes clear that he’s not talking about mere nanoscale materials, but rather true nanotech, as envisioned by Feynman in 1959 and advanced by Eric Drexler in the 1980s and ’90s: atomically precise manufacturing, placing each atom one at a time exactly where you want it, giving you complete control over the structure of matter. In Hall’s telling, while this technology is obviously a ways off, the physics is sound and many of the basic principles have been worked out.
The potential capabilities of mature nanotech are mind-blowing. The incredible speed alone would dramatically lower the price of literally every physical product. Hall estimates that the entire capital stock of the US—“every single building, factory, highway, railroad, bridge, airplane, train, automobile, truck, and ship”—could be rebuilt in a week. And nanotech would allow materials with extreme properties, such as the strength of diamond, to be used for everyday manufacturing and construction.
The possibilities are straight out of science fiction. The “space pier”, a set of towers a hundred kilometers tall with a magnetic accelerator to shoot payloads into orbit, saving the fuel required to escape Earth’s gravity well and bringing down launch costs by three orders of magnitude. Or the “Weather Machine”, a fleet of quintillions of centimeter-sized balloons floating in the stratosphere, made of nanometer-thick diamond, with remote-controlled mirrors that can reflect light or allow it to pass through, forming a “programmable greenhouse gas” that can regulate temperature and direct solar energy. And of course, affordable flying cars.
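The “three orders of magnitude” claim becomes more plausible with a quick energy estimate. This is my own back-of-the-envelope, not a calculation from the book: the physical energy needed to put a kilogram into low Earth orbit is surprisingly small if you can deliver it electrically instead of by burning propellant.

    # Rough energy needed to put 1 kg into low Earth orbit from a ~100 km tower,
    # delivered electrically rather than chemically. My own estimate, not the book's.
    v_orbit = 7800.0      # m/s, approximate orbital velocity in LEO
    h_tower = 100_000.0   # m, height of the hypothetical space pier
    g = 9.8               # m/s^2

    joules_per_kg = 0.5 * v_orbit**2 + g * h_tower   # kinetic + potential energy
    kwh_per_kg = joules_per_kg / 3.6e6
    print(f"~{kwh_per_kg:.1f} kWh per kg, ~${kwh_per_kg * 0.10:.2f} of electricity at $0.10/kWh")

Even with generous allowances for losses, a few dollars of electricity per kilogram compares to launch prices of thousands of dollars per kilogram today, which is roughly where a three-orders-of-magnitude reduction would come from.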
Energy, energy, energy
One of the clearest indications of stagnation is the flatlining of energy usage. Because the growth in this metric was mentioned in the autobiography of Henry Adams (grandson of John Quincy Adams), Hall calls the long-term trend of about 7% annual growth in energy usage per capita the “Henry Adams Curve”. In the late 20th century, we fell off it.
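To get a feel for what falling off the curve means, here is the bare compounding arithmetic on the ~7% figure cited above (nothing here is data from the book):

    # What staying on a ~7%/year growth curve implies, purely as compounding arithmetic.
    growth_rate = 0.07
    for years in (10, 25, 50):
        multiple = (1 + growth_rate) ** years
        print(f"{years:2d} years on the curve: ~{multiple:4.1f}x the starting energy use")

Fifty years of 7% growth is roughly a thirty-fold increase, which is why falling off the curve around 1970 looms so large in Hall’s account.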
Some techno-optimists, such as Andy McAfee, celebrate the flatlining and even peaking of resource usage curves, saying that we are getting “more from less”. Hall reminds us that more is more. All else being equal, energy efficiency is great. But there’s no reason to believe that flatlining or declining resource usage is optimal for progress. A large part of progress is harnessing ever more resources and putting them to productive use. And indeed, we’re going to need lots more energy if we’re ever going to get nanotech manufacturing, regular space travel, and of course flying cars. In fact, it is telling that the one technological revolution of the last 50 years, computing, was the one that didn’t need more power than the technology of the 1970s could provide.
Where will all this energy come from? It could come from solar: the amount of power reaching the Earth from the Sun is some 10,000 times greater than the current power requirements of humanity. Of course, it’s hard to harness in practice, owing to cloud cover and pesky inconveniences such as nighttime, but that’s nothing a well-placed fleet of a quintillion remote-controlled aerostats in the stratosphere couldn’t handle.
But the majority of the energy discussion in the book focuses on the amazing potential of nuclear. The upshot is that we ought to have nuclear-powered everything. Nuclear homes with local, compact reactors—they don’t need to be on the grid. Nuclear cars, whether flying or ground. Even nuclear batteries—I was shocked to learn that certain designs of nuclear batteries were actually manufactured decades ago and used safely in implantable pacemakers.
The main benefit, of course, is the insane energy density of nuclear fuel: just over a pound of enriched uranium has as much energy as 10,000 gallons of gasoline or over 100,000 pounds of anthracite coal. With nuclear batteries, no device would ever need to be recharged; with nuclear engines and generators, “your ground car and your home’s power unit will be refueled upon annual maintenance.” This fuel efficiency makes the economics of nuclear look almost identical to that of renewables: the fuel is practically free, compared to the fixed cost of infrastructure: “A wind turbine uses up more lubricating oil than a nuclear plant uses uranium, per kilowatt-hour generated.”
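As a rough sanity check on those figures, here is a burnup-based comparison; the inputs are my own assumptions (typical light-water-reactor burnup of ~45 GWd per tonne of enriched fuel, gasoline at ~34 MJ/L), not numbers from the book:

    # Rough check: gallons of gasoline per pound of enriched reactor fuel.
    # Assumptions (mine, not the book's): ~45 GWd/tonne burnup, gasoline ~34 MJ/L.
    burnup_j_per_kg = 45e9 * 86400 / 1000      # 45 GWd per tonne -> joules per kg of fuel
    gasoline_j_per_gallon = 34e6 * 3.785       # ~34 MJ/L * 3.785 L per US gallon
    kg_per_pound = 0.4536

    gallons_per_pound = burnup_j_per_kg * kg_per_pound / gasoline_j_per_gallon
    print(f"~{gallons_per_pound:,.0f} gallons of gasoline per pound of fuel")

That lands in the same ballpark as the book’s figure (the exact number depends on enrichment and burnup), and complete fission of pure U-235 would be more than an order of magnitude beyond it.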
The book describes several potential engineering approaches for nuclear power, not just the established fission plants based on uranium-235 that are in operation today, but everything up through speculative possibilities such as “chainless reactors” that bombard fissionable materials with high-energy neutrons, avoiding any nuclear chain reaction. Hall says that even cold fusion—er, sorry, I mean “low-energy nuclear reactions” (LENR)—deserves more research: although it might still turn out to be an unexploitable phenomenon or even an experimental artifact, there’s something going on that we don’t yet understand. Three chapters are dedicated to nuclear power; my main takeaway is that the variety of possibilities, and the scope and magnitude of the potential here, are breathtaking and underappreciated.
The need for energy is fundamental to the economy, and yet a remarkable feature of our culture is the opposition to almost any form of energy—a pathology that Hall dubs “ergophobia”. (More on this below.)
Level 5
Putting together all this and more, Hall summarizes his vision for the future as a “Second Atomic Age” based on nuclear, nanotech, and artificial intelligence. It’s a vision of continued exponential or even super-exponential progress, a world in which we see improvement in the world of atoms as fast as we’ve recently seen improvement only in the world of bits.
Hall cites the global development advocate Hans Rosling, who classified the world population into four levels of income, on a logarithmic scale from $1/day (extreme poverty) to $64/day (which gets you electricity, a car, a washing machine, etc.). Using this scale, he says (emphasis added):
The miracle of the Industrial Revolution is now easily stated: In 1800, 85% of the world’s population was at Level 1. Today, only 9% is. Over the past half century, the bulk of humanity moved up out of Level 1 to erase the rich-poor gap and make the world wealth distribution roughly bell-shaped.
The average American moved from Level 2 in 1800, to Level 3 in 1900, to Level 4 in 2000.
We can state the Great Stagnation story nearly as simply: There is no Level 5.
Where Is My Flying Car? paints a vivid picture of what Level 5 would look like, and why we should keep working to get there.
The roots of stagnation
So, why aren’t we on Level 5 yet? What caused the Great Stagnation? What flatlined the Henry Adams Curve? Why don’t we have nanotech manufacturing and nuclear-powered everything? And where is my flying car?
Hall blames a number of political and cultural factors:
Centralized funding
He starts with a case study on nanotech. True nanotech, he says, was killed by federal funding. Well, not by federal funding directly, but by a storm of academic politics that followed predictably from the $500 million National Nanotechnology Initiative kicked off under President Clinton. With a new pot of money on the table, and with academic funding being largely a zero-sum game, researchers in adjacent fields responded in two ways. First, they rebranded whatever they were doing as “nanotech”, even projects such as nanoscale materials science that are unrelated to the original vision of atomically precise manufacturing. Second, they aggressively attacked that original vision. The result was that all the funding and credibility for true nanotech evaporated.
Hall cites a passage from Machiavelli written in the 1500s that describes how politically dangerous it is to attempt to introduce an innovation: all of those who will be the losers if you succeed are galvanized against you, whereas those who would be the winners are much less motivated, given how speculative and uncertain the new innovation is. Seeing sixteenth-century social theory perfectly describe modern academic politics, Hall dubs this “The Machiavelli Effect.” And he cites other instances: cold fusion research, he says, was killed by a similar process.
He concludes that “the increasing centralization and bureaucratization of science and research funding” is a major culprit:
Centralized funding of an intellectual elite makes it easier for cadres, cliques, and the politically skilled to gain control of a field, and they by their nature are resistant to new, outside, non-Ptolemaic ideas. The ivory tower has a moat full of crocodiles.
It is at the least suspicious, one must admit, that the major runup in civilian federal funding for research pretty nearly coincides with the recent period of technological slowdown.
The burden of regulation
Hall quotes a post on a message board suggesting that even if you had built a flying car and were ready to take to the air, you’d be shot down by the FAA, the mayor, the news media, the insurance company, and your neighbors. An even greater regulatory burden applies to nuclear power, which Hall blames for the skyrocketing cost of power plants in the US.
In addition to the direct friction this burden places on innovation, it’s also a drain on human capital.
How much of a drain?
According to a study conducted by Tillinghast-Towers Perrin, the cost of the U.S. tort system consumes about two percent of GDP, on average. If we assume this mostly started around 1980 when lawyers skyrocketed and the airplane industry was destroyed, the long-run compound-interest effect on the economy as a whole is startling: without it our economy today would be twice the size it actually is. This is the closest we can come to measuring the effect of taking more than a million of the country’s most talented and motivated people and putting them to work making arguments and filing briefs against each other so their efforts mostly cancel out, instead of inventing, developing, and manufacturing things which could have made life better.
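As I read it, the “twice the size” figure is simple compounding: treat the ~2% of GDP as growth forgone each year since about 1980. (That interpretation, and the arithmetic below, are mine, not the study’s.)

    # Compounding a ~2%-of-GDP annual drag from roughly 1980 to roughly 2020.
    drag = 0.02
    years = 2020 - 1980
    print(f"(1 + {drag})^{years} ~= {(1 + drag) ** years:.2f}x")   # about 2.2x

Whether forgone tort costs would really have compounded into growth is debatable, but that is the arithmetic behind the claim.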
The counterculture
Through maybe the 1950s, visions of the future, although varied, were optimistic. People believed in progress and saw technology as taking us forward to a better world. In the span of a generation, that changed, with the shift becoming prominent by the late 1960s. A “counterculture” arose which did not believe in technology or progress: indeed, a major element of the counterculture was the environmentalist movement, much of which saw technology and industry as actively destroying the Earth.
In H. G. Wells’s The Time Machine, the “Eloi” were a weak, dissolute race of useless people who contributed nothing to society (a parody of the idle rich of 19th-century England). Hall calls the activists of the counterculture the “Eloi Agonistes”, and blames them for “ergophobia” and for excessive regulation:
Unlike a century ago, today for everyone who is working on advancing technological progress, there is someone else who fervently believes that they are saving the planet by stopping them.
Just as much as legal compliance and litigation, social activism is a drain on human capital:
… simply the diversion of so many of the most talented and motivated members of the last several generations from productive pursuits to expensive virtue signaling is one of the main causes of the technological slowdown and the Great Stagnation. If your neighbor is Saving the Planet, it seems somehow less valuable merely to keep clean water running in the mains, or fill potholes, or build bridges. Eloi Agonistes have stolen the respect and gratitude that the people who are actually doing valuable work should be getting.
The shift in values was reflected in, and reinforced by, a shift in science fiction towards the dystopian:
Science fiction has a long and valuable history of providing us with visions of a better world. Verne, Wells, Burroughs, Gernsback—even Bellamy—much less Campbell, Doc Smith, van Vogt, Heinlein, Asimov, Garrett, Piper, Niven, and Pournelle, provided people with places and lives they could imagine and aspire to create. Science fiction since the Sixties has signally failed in that regard; we have been fed, by and large, a diet of Chicken Little soup in a pot of message, ladled out over leg of Frankenstein.
Where did the Eloi Agonistes come from, and why did they rise when they did? Hall suggests a couple of related factors. One, the success of industrial civilization at meeting everyone’s basic needs for food, clothing and shelter pushed people up Maslow’s Hierarchy to seek self-actualization, which they did in the form of social activism. Two, the closing of the frontier meant the loss of a world in which people had to contend directly with nature and reality:
After a long period of sustained social interaction, many forms of self-deception will become baked into the culture, and major social institutions will become in large part vehicles for virtue signalling…. But on the frontier, where a majority of one’s efforts are not in competition with others but directly against nature, self-deception is considerably less valuable. A culture with a substantial frontier is one with at least a countervailing force against the cancerous overgrowth of largely virtue-signalling, cost-diseased institutions.
Personally, I don’t think these explanations tell the whole story. If people needed self-actualization, why choose anti-technology crusades? Why not self-actualize through invention, or art? I think we need to find an explanation not only for the form of people’s behavior, but for the content. Deirdre McCloskey suggests that the intellectual class had turned against capitalism and industry as early as 1848 (and Ayn Rand traces the intellectual roots to the late 1700s, blaming Immanuel Kant for killing the Enlightenment). This remains an open question for me.
There are many writers with optimistic visions of the future. However, the goals I most often hear are all the negation of negatives: cure cancer, eliminate poverty, stop climate change.
This is good, but it is not enough. We should not only cure disease and let everyone live to what is now considered old age—we should cure aging itself and extend human lifespan indefinitely. We should not seek to merely sustain current per-capita energy usage—we should get back on the Henry Adams Curve and increase it. We should not only seek to avoid worsening the climate—we should seek to actively control and optimize it for human ends. We should not merely get the whole world up to Level 4—we should be striving for Level 5.
Aiming only at the negation of negatives, as some so-called techno-optimists do, is a poor sort of optimism. It is actually calling for very limited progress, followed by stagnation. It is complacency with the status quo, content with bringing the whole world up to the current best standard of living, but not increasing it. In this context, I found Where Is My Flying Car? refreshing. Hall unabashedly calls for unlimited progress in every dimension.
My only significant criticism (well, other than the misformatted data tables) is that the content isn’t tightly organized; the chapters jump around a lot. And there are a number of very deep dives and long digressions on detailed technical topics; I mostly enjoyed these, but if you’re not into them, feel free to skim.
Overall, though, I found the book captivating and it has become one of my favorite books on stagnation and progress. Recommended for all my readers.
Well, it never went to zero; it continues to this day, as one institution after another (e.g. Google a year or two ago) randomly starts a program (...then usually drops it eventually), among other things. I’ve heard the figure $500M as an estimate of the total amount spent on cold fusion / LENR research. (But I can’t immediately find where I heard that.)
I actually spent quite a lot of my free time over several years blogging about that topic; see https://coldfusionblog.net/. I started the blog with an open mind but wound up strongly agreeing with the mainstream scientific consensus: there’s no such thing as cold fusion / LENR. See especially my last post, The case against cold fusion experiments. :-)
Very interesting
I have not read the book, so I might not do it justice. But while the topic is greatly worthwhile, some of the stories given to explain “the roots of stagnation” seem off to me. I will try to explain why.
On nanotech: the NNI was a US initiative, and it is strange to explain a worldwide setback by a dysfunction of US funding. Europe has a completely different science funding system, with EU-scale funding interacting with different national systems and priorities. A research area that got large centralized funding in the EU is nanoscale materials, which above is described as having contributed to the death of nanotech in order to get a share of the pot of money. Yet in both the US and the EU, nanoscale materials research is alive, if with somewhat disappointing results, and nanotech is struggling. You could say that research in the EU is tied to that in the US, which would be true, although it is a complex bidirectional relationship. But other geographical areas are more loosely connected; Japan, for example, has its own research ecosystem, to the point that it often takes me much longer to read a Japanese paper—too many references to things I have never heard of. There is no chance the NNI killed nanotech in Japan as well. My hypothesis (pure speculation) is that nanotech tried to go for the technological payoff too soon, when it needed ten more years of basic research, improved tools, etc.
On nuclear: I don’t think it makes sense to discuss nuclear without mentioning the military applications, i.e. the atomic bomb. I think this contributed to the difficulties of the technology through two channels. First, in the public mind nuclear = scary is a very clear and strong association, so good luck to any attempt to deregulate nuclear power. Second, a top priority of governments has been to limit access to nuclear technology, i.e. the opposite of a free market. They were probably correct: in the world imagined above, with at-home nuclear reactors, all countries and many non-state actors would have nuclear bombs. By now we would have had a dozen nuclear wars, and al-Qaeda would have blown up Manhattan instead of just destroying two skyscrapers.
I feel that the author of the book may have approached the subject from a libertarian perspective and concluded, unsurprisingly, that the government is at fault.
I also have more nebulous doubts about the “counterculture trends” explanation. My feeling is that anti-technology culture has always been there, and pro-technology culture has not really gone away. But explaining myself on the issue would make this wall of text even longer :)
The author does overstate the harm from the NNI. Drexler’s vision needed larger-scale coordination than a bunch of small academic labs focused on publishing papers could provide. It needed something closer to the Apollo program. The details of the NNI are just a small part of the political and cultural changes that made it harder to organize an Apollo program in 2000 than it was in 1960.
The author does have libertarian tendencies, but he implies that libertarianism is less important than the difference between a well-run government and a poorly run government. A fair amount of the book is devoted to analyzing why the US government became worse around 1970.
Wikipedia shows a rise in doctors per capita that looks suspiciously similar to the rise in lawyers:
https://en.wikipedia.org/wiki/File:Physicians_in_the_United_States_per_10,000_people_(1850-2009).svg
While the growth of both professions started at around the same time, doctors seem to have doubled while lawyers quadrupled over the same timeframe.
Hm. Very interesting.
EDIT: I just remembered that I think this is mentioned in The Rise and Fall of American Growth, and that it was attributed to an increase in specialization.
Thanks for this very comprehensive review. It raises many interesting questions.
I think part of this is that you react against a system that doesn’t give you much status. If the social system allocates most status and resources to people who can master the creation of technology and the allocation of capital, but you’re not capable of that, then you will tend to criticise that system. And of course, most people are not capable of invention and art, or have never been given an opportunity to develop those faculties.
A stable social system needs to have a way of giving everyone access to meaning, especially those who don’t succeed in a conventional, material sense. Valorising technological progress and consumption can provide meaning for some, but not for those who don’t succeed materially. In contrast, a religion like Christianity gave extra meaning to those who suffered, and in this way counterbalanced unequal material outcomes. That’s my interpretation, anyway. As for how one might give everyone access to meaning in a postmodern world, I have some thoughts on that here (Section 11.1).
It would be very interesting to compare this to other countries. My loose impression is that the number of cases relating to tort law increased quite dramatically in Ireland over the past twenty years, such that it has had a big effect on the price of insurance. There are regular news items about such cases. But I don’t see those in the media of other European countries.
In Ireland (and maybe in the US), this problem could be solved by two actions. First, imposing maximum damages via legislation. Consider whiplash. According to one article, “the average amount paid out in Ireland for whiplash was 4.4 times higher than for similar injuries in England and Wales” so “if whiplash claims were capped at a maximum of €5,000, average premiums would drop from €700 to between €550 and €590 for most insured people.”
Second, moving away from punitive damages, which seem to have been embraced by the US system but rejected in most European systems.
If regulation killed nuclear power plants in the US, why aren’t there any other countries building nuclear plants more cheaply?
Well, I think there are. See this article, especially South Korea: https://www.vox.com/2016/2/29/11132930/nuclear-power-costs-us-france-korea
Nuclear costs are declining in China.
My impression is that China has copied some of the US regulatory framework, but still allows more discretion.
I used financial data from CGN Power Company to estimate nuclear electricity selling prices. Data for 2011 from this report shows RMB0.3695 ($0.0558) / kWh, declining to RMB0.30 ($0.0425) / kWh in 2020 (from this report).
That’s not enough to have a big effect on the Chinese economy, but it’s enough to show that something’s working better in China than in the US.
I’m not sure nuclear power is fundamentally different in France or South Korea or China. Nuclear plants are heavily regulated, centrally planned things everywhere, and for good reasons—nuclear is quite safe if done correctly but has the potential to go immensely wrong. The only example I know well is France, which is one of the very few countries that relies massively on nuclear power, and it does so thanks to massive government planning and funding from the ’60s onward.
And I want to point something out: we have nuclear power plants thanks to heavily centralised, government-funded research. The same goes for the computer (Turing), satellite telecommunications, and basically everything else: DNA, relativity, quantum mechanics, computing, and radioactivity are fields where a huge chunk of the applications (and most of the groundwork in fundamental research) came from public spending and planning. CRISPR-Cas9 is a recent example too. Oh yeah, and the discovery of AIDS. And SpaceX, by the way, also benefits from disguised NASA funding.
Now it could also be argued that the academic world is in itself a kind of free market for ideas—the currency being citations and reputation rather than hard dollars… and in this view public spending in research is useful but government planning less so.
It’s worth distinguishing public spending in general from heavily centralized public spending.
Thomas Kuhn wrote about how productive scientific fields are driven by the researchers in those fields tackling the problems they consider to be tractable and moving the field forward. In many cases those researchers are paid by public spending.
Centralized, big public-spending projects, however, need political buy-in and require researchers to justify their research by the needs of the political agenda that funds it.
Government planning seems to be useful when it comes to scaling up a technology. At the start of the Human Genome Project it was easy to state the goal of sequencing the genome and the requirement to build better sequencing technology to achieve that goal.
Nuclear power plants are less regulated in South Korea, and for a long time the regulations that did exist were not well followed, due to corruption. Now it seems that the South Koreans have cracked down on that corruption and want to exit nuclear power as well.
France seems to have rising costs for nuclear energy due to heavier regulation as well, but still does it more cheaply. Part of the reason seems to be fewer public inquiries (a form of regulation).
A similar question about rising costs in the US concerns public transportation, especially building subway tunnels, where costs can vary by a factor of ten across countries, with US costs being about the highest globally. The blog Pedestrian Observations has talked a lot about this, for instance in this blog post. Per that post, the ultimate cause is that the US is unwilling to learn from other countries because it still perceives itself as being top in the world at public transit, while much of Europe and East Asia is surpassing it.
Some folks at NYU are doing an interesting project collecting data and case studies on this: Transit Costs Project
I’m generally in agreement with most of these points. One exception is that while I’d like much more nuclear power, I’d still prefer it in large reactors, or at least not so distributed as to be in every home and device. It is a technology dangerous enough to warrant some level of oversight.
On nanotech: true nanoscale manufacturing will always involve phases where you’re assembling nanoscale materials, or assembling using nanoscale components, and frankly the last few decades of materials science have taught us how hard it can be to really engineer and predict the behavior of materials in that scale range, let alone make use of it in macro-scale structures like metamaterials and nanocomposites. We’ve gotten way better at that, now, even if most big companies don’t yet really believe it beyond some very low-hanging fruit (chemicals and materials companies have long memories, and excessive early hype can poison the well for quite a while).
When looking at the question, I had the hypothesis: “Maybe it became more worthwhile to focus on the poor getting more resources than on the rich getting more.”
I looked at the US Gini numbers, and interestingly, from 1927 to the start of the Great Stagnation in the 1970s the Gini coefficient fell in the US, and with the Great Stagnation it started rising.
The rise of the flying car industry
All progress is solving problems in the end.
Yes, but not all of it is well-understood as problem-solving ahead of time:
https://blog.spec.tech/p/is-necessity-actually-the-mother
One of the more contrarian claims of the book is that intermediate-level nuclear waste is actually safe.
Is this true? It seems like this claim needs stronger backing than the opinion of an online tech webzine editor.
I opened a Skeptics.SE question for the claim.