Book review: Where Is My Flying Car? A Memoir of Future Past, by J.
Storrs Hall (aka Josh).
If you only read the first 3 chapters, you might imagine that this is
the history of just one industry (or the mysterious lack of an
industry).
But this book attributes the absence of that industry to a broad set of
problems that are keeping us poor. He looks at the post-1970 slowdown in
innovation that Cowen describes in The Great
Stagnation[1].
The two books agree on many symptoms, but describe the causes
differently: where Cowen says we ate the low hanging fruit, Josh says
it’s due to someone “spraying
paraquat on the
low-hanging fruit”.
The book is full of mostly good insights. It significantly changed my
opinion of the Great Stagnation.
The book jumps back and forth between polemics about the Great
Strangulation (with a bit too much outrage
porn), and nerdy
descriptions of engineering and piloting problems. I found those large
shifts in tone to be somewhat disorienting—it’s like the author can’t
decide whether he’s an autistic youth who is eagerly describing his
latest obsession, or an angry old man complaining about how the world is
going to hell (I’ve met the author at
Foresight conferences, and got similar but
milder impressions there).
Josh’s main explanation for the Great Strangulation is the rise of Green
fundamentalism[2], but he also describes other cultural /
political factors that seem related. But before looking at those, I’ll
look in some depth at three industries that exemplify the Great
Strangulation.
The good old days of Science Fiction
The leading SF writers of the mid 20th century made predictions for
today that looked somewhat close to what we got in many areas, with a
big set of exceptions in the areas around transportation and space
exploration.
The absence of flying cars is used as an argument against futurists’
ability to predict technology. This can’t be dismissed as just a minor
error of some obscure forecasters. It was a widespread vision of leading
technologists.
Josh provides a decent argument that we should treat that absence as a
clue to why U.S. economic growth slowed in the 1970s, and why growth is
still disappointing.
Were those SF writers clueless optimists, making mostly random
forecasting errors? No! Josh shows that for the least energy intensive
technologies, their optimism was about right, and the more energy
intensive the technology was, the more reality let them down.
Is it just a coincidence that people started worshiping energy
conservation around the start of the Great Stagnation? Josh says no, we
developed ergophobia—no, not the standard
meaning of ergophobia: Josh
has redefined it to mean fear of using energy.
Did flying cars prove to be technically harder than expected?
The simple answer is: mostly no. The people who predicted flying cars
knew a fair amount about the difficulty, and we may have forgotten more
than we’ve learned since then.
Josh describes, in more detail than I wanted, a wide variety of
plausible approaches to building flying cars. None of them clearly
qualify as low-hanging fruit, but they also don’t look farther from our
grasp than did flying machines in 1900.
How serious were the technical obstacles?
Air traffic control
Before reading this book, I assumed that there were serious technical
problems here. In hindsight, that looks dumb.
Josh calculates that there’s room for a million non-pressurized aircraft
at one time, under current rules about distance between planes (assuming
they’re spread out evenly; it doesn’t say all Tesla employees can land
near their office at 9am). And he points out that seagull tornadoes
(see this video) provide
hints that current rules are many orders of magnitude away from any hard
limits.
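That claim is the kind of thing a reader can sanity-check. Here is my own back-of-the-envelope sketch (not Josh's calculation; the separation box and altitude band are assumed numbers), counting how many aircraft fit over the continental US if each needs a few miles of horizontal separation and a thousand feet of vertical separation:

```python
# Back-of-the-envelope airspace capacity estimate. All numbers are
# illustrative assumptions, not figures from the book.
CONUS_AREA_SQ_MI = 3_000_000   # rough land area of the continental US
HORIZONTAL_SEP_MI = 3          # assumed horizontal separation between aircraft
VERTICAL_SEP_FT = 1_000        # assumed vertical separation between altitude layers
USABLE_ALTITUDE_FT = 10_000    # unpressurized flight, roughly sea level to 10,000 ft

slots_per_layer = CONUS_AREA_SQ_MI / HORIZONTAL_SEP_MI**2
layers = USABLE_ALTITUDE_FT // VERTICAL_SEP_FT
print(f"{slots_per_layer * layers:,.0f} simultaneous aircraft")  # roughly 3 million
```

Even with much more conservative assumptions, the answer stays far above current general aviation traffic, which seems to be the point of the seagull comparison.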
Regulators’ fear of problems looks like an obstacle, but it’s unclear
whether anyone put much thought into solving them, and it doesn’t look
like the industry got far enough for this issue to be very important.
Skill
It seems unlikely that anywhere near as many people would learn to fly
competently as have learned to drive. So this looks like a large
obstacle for the average family, given 20th century technology.
But we didn’t get close to the point where that was a large obstacle to
further adoption. And 21st century technology is making
progress
toward convenient ways of connecting competent pilots with people who
want to fly, except where it’s actively
discouraged.
Cost
If the economic growth of 1945-1970 had continued, we’d be approaching
wealth levels where people on a UBI … oops, I mean on a national basic
income could hope to afford an occasional ride in a flying Uber that
comes to their door. At least if there were no political problems that
drove up costs.
Weather
Weather will make flying cars a less predictable way than ground cars to
reach a given destination. That seems to explain some of people’s
reluctance to buy flying cars, but at most a modest part of the puzzle.
Safety
The leading cause of death among active pilots is … motorcycle
accidents.
I wasn’t able to verify that, and other sources say that general
aviation is roughly as dangerous as
motorcycles.
Motorcycles are dangerous enough that they’d likely be illegal if they
hadn’t been around before the Great Strangulation, so whether either of
those is considered safe enough seems to depend on accidents of
history.
People have irrational fears of risk, but there has also been a rational
trend of people demanding more safety because we can now afford more
safety. I expect this is a moderate part of why early SF writers
overestimated demand for flying cars.
The liability
crisis seems
to have hit general aviation harder than it hit most other industries.
I’m still unclear why.
One of the more ironic regulatory pathologies that has shaped the
world of general aviation is that most of the planes we fly are either
40 years old or homemade—and that we were forced into that position
in the name of safety.
If the small aircraft industry hadn’t mostly shut down, it’s likely that
new planes would have more safety features (airbags? whole-airplane
parachutes?).
The flying car industry hit a number of speedbumps, such as WWII
diverting talent and resources to other types of aviation, then a key
entrepreneur
being distracted by a patent dispute, and then was largely shut down by
liability lawsuits. It seems like progress should have been a bit faster
around 1950-1970; I’m confused as to whether the industry did well
then.
At any rate, it looks like liability lawsuits were the industry’s
biggest problem, and they combined with a more hostile culture and
expensive energy to stop progress around 1980.
The book shifted my opinion from “those SF writers were confused” to
“flying cars should be roughly as widespread as motorcycles”. We should
be close to having autopilots which eliminate the need for human pilots
(and the same for motorcycles?), and then I’d consider it somewhat
reasonable for the average family to have a flying car.
Nuclear Power
Josh emphasizes the importance of cheap energy for things such as flying
cars, space travel, eradicating poverty, etc., and identifies nuclear
power as the main technology that should have made energy increasingly
affordable. So it seems important to check his claims about what went
wrong with nuclear power.
He cites a study by Peter Lang, with a strange learning curve: it shows a trend of costs declining with experience, just like a normal
industry where there’s some competition and where consumers seem to care
about price. Then that trend was replaced by a clear example of cost
disease[3].
I’ve previously blogged about the value of learning curves (aka experience curve effects) in forecasting.
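For readers unfamiliar with experience curves, here is a minimal sketch (with made-up numbers rather than Lang's data) of the standard model: cost falls as a power law of cumulative production, so it plots as a straight line on log-log axes, and a sustained break above that line is the kind of thing I'd call cost disease.

```python
import numpy as np

# Hypothetical data: cumulative capacity built (GW) and cost ($/kW).
# These numbers are illustrative only, not from Peter Lang's study.
cumulative_gw = np.array([1, 2, 4, 8, 16, 32])
cost_per_kw = np.array([1000, 850, 720, 610, 520, 440])

# Experience curve: cost = a * cumulative^b, i.e. linear in log-log space.
b, log_a = np.polyfit(np.log(cumulative_gw), np.log(cost_per_kw), 1)

# "Learning rate" = fractional cost drop for each doubling of cumulative capacity.
learning_rate = 1 - 2 ** b
print(f"progress exponent b = {b:.3f}, learning rate per doubling = {learning_rate:.1%}")

# A simple cost-disease check: do later data points sit far above the fitted line?
predicted = np.exp(log_a) * cumulative_gw ** b
residual_pct = (cost_per_kw - predicted) / predicted * 100
print("residuals (%):", np.round(residual_pct, 1))
```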
This is pretty inconsistent with running out of low-hanging fruit, and
is consistent with a broad class of political problems, including the
hypothesis of hostile regulation, and also the hypothesis that nuclear
markets were once competitive, then switched to having a good deal of
monopoly power.
This is a pretty strong case that something avoidable went wrong, but
leaves a good deal of uncertainty about what went wrong, and Josh seemed
a little too quick to jump to the obvious conclusion here, so I
investigated further[4]. I couldn’t find anyone arguing that
nuclear power hit technical problems around 1970, but then it’s hard to
find many people who try to explain nuclear cost trends at all.
This book chapter
suggests there was a shift from engineering decisions being mostly made
by the companies that were doing the construction, to mostly being
determined by regulators. Since regulators have little incentive to care
about cost, the effect seems fairly similar to the industry becoming a
monopoly. Cost disease seems fairly normal for monopolies.
That chapter also points out the effects of regulatory delays on costs:
“The increase in total construction time … from 7 years in 1971 to 12
years in 1980 roughly doubled the final cost of plants.”[5]
In sum, something went wrong with nuclear power. The problems look more
political than technical. The resulting high cost of energy slowed
economic progress by making some new technologies too expensive, and by
diverting talent to energy conservation. And by protecting the fossil
fuel industries, it caused millions of deaths, and maybe 174 Gt of
unnecessary CO2 emissions (about 31% of all man-made CO2 emissions).
This book convinced me that I’d underestimated how important nuclear
power could have been.
Nanotech
So the technology of the Second Atomic Age will be a confluence of two
strongly synergistic atomic technologies: nanotech and nuclear.
The book has a chapter on the feasibility of Feynman / Drexler style
nanotech, which attempts to find a compromise between Drexler’s
excruciatingly technical
Nanosystems and his
science-fiction style Engines of
Creation. That
compromise will convince a few people who weren’t convinced by Drexler,
but most people will either find it insufficiently technical, or else
hard to follow because it requires a good deal of technical knowledge.
Josh explains some key parts of why the government didn’t fund research
into the Feynman / Drexler vision of nanotech: centralization and
bureaucratization of research funding, plus the Machiavelli
Effect:
the old order opposes change, and beneficiaries of change “do not
readily believe in new things until they have had a long experience of
them.”
Josh describes the mainstream reaction to nanotech fairly well, but
that’s not the whole story.
Why didn’t the military fund nanotech? Nanotech would likely exist today
if we had credible fears of Al Qaeda researching it in 2001. But my fear
of a nanotech arms race exceeds my desire to use nanotech.
Many VCs would get confused by top academics who dismissed (straw-man
versions of) Drexler’s vision. But there are a few VCs such as Steve
Jurvetson
who understand Drexler’s ideas well enough to not be confused by that
smoke. With those VCs, the explanation is that no entrepreneurs tried a sufficiently incremental path.
Most approaches to nanotech require a long enough series of development
steps to achieve a marketable product that VCs won’t fund them. That’s
not a foolish mistake on the VCs’ part—they have sensible reasons to think
that some other company will get most of the rewards (how much did Xerox
get from PARC’s UI innovations?). Josh promotes an approach to nanotech
that seems more likely to produce intermediate products which will sell.
As far as I know, no entrepreneurs attempted to follow that path (maybe
because it looked too long and slow?).
The patent system has been marketed as a solution to this kind of
problem, but it seems designed for a
hedgehog-like
model of innovation, when what we ought to be incentivizing is a more
fox-like innovation process.
Mostly there isn’t a good system of funding technologies that take more
than 5 years to generate products.
If government funding got this right during the golden age of SF, the
hard questions should be focused more on what went right then, than on
what is wrong with funding now. But I’m guessing there was no golden age
in which basic R&D got appropriate funding, except when we were lucky
enough for popular opinion to support the technologies in question.
Problems with these three industries aren’t enough to explain the
stagnation, but Josh convinced me that the problems which affected these
industries are more pervasive, affecting pretty much all
energy-intensive technologies.
Culture and politics
Of all the great improvements in know-how expected by the classic
science-fiction writers, competent government was the one we got the
least.
I’ll focus now on the underlying causes of stagnation.
Green fundamentalism and ergophobia are arguably sufficient to explain
the hostility to nuclear power and aviation, but it’s less clear how
they explain the liability crisis or the stagnation in nanotech.
Josh also mentions a variety of other cultural currents, each of which
explain some of the problems. I expect these are strongly overlapping
effects, but I won’t be surprised if they sound as disjointed as they
did in the book.
It matters whether we fear an all-seeing god. From the book Big Gods: How Religion Transformed Cooperation and Conflict:
In a civilization where a belief in a Big God is effectively
universal, there is a major advantage in the kind of things you can do
collectively. In today’s America, you can’t be trusted to ride on an
airliner with a nail file. How could you be trusted driving your own
1000-horsepower flying car? … The green religion, on the other hand,
instead of enhancing people’s innate conscience, tends to degrade it,
in a phenomenon called “licensing.” People who virtue-signal by buying
organic products are more likely to cheat and steal.[6]
From Peter Turchin: when
an empire becomes big enough to stop worrying about external threats to
its existence, the cooperative “we’re all in the same boat” spirit is
replaced by a “winner take all” mentality.
the evolutionary pressures to what we consider moral behavior arise
only in non-zero-sum interactions. In a dynamic, growing society,
people can interact cooperatively and both come out ahead. In a static
no-growth society, pressures toward morality and cooperation vanish;
Self deception is less valuable on a frontier where you’re struggling
with nature than it is when most struggles involve social interaction,
where self-deception makes virtue signaling
easier.
“If your neighbor is Saving the Planet, it seems somehow less valuable
merely to keep clean water running”.
“Technologies that provoke antipathy and promote discord, such as social
networks, are the order of the day; technologies that empower everyone
but require a background of mutual trust and cooperation, such as flying
cars, are considered amusing anachronisms.”
Those were Josh’s points. I’ll add these thoughts:
It’s likely that cultural changes led competent engineers to lose
interest in working for regulatory agencies. I don’t think Josh said
that explicitly, but it seems to follow fairly naturally from what he
does say.
Josh refers to Robin Hanson a fair amount, but doesn’t mention Robin’s
suggestion that increasing wealth lets us return to forager
values.
“Big god” values are clearly farmer values.
Mancur Olson’s The Rise and Decline of
Nations
(listed in the bibliography, without explanation), predicted in 1982
that special interests would be an increasing drag on growth in stable
nations. His reasoning differs a fair amount from Josh’s, but their
conclusions sound fairly similar.
Josh often focuses on Greens as if they’re a large part of the problem,
but I’m inclined to focus more on the erosion of trust and cooperation,
and treat the Greens more as a symptom.
The most destructive aspects of Green fundamentalism can be explained by
special interests, such as coal companies and demagogues, who manipulate
long-standing prejudices for new purposes. How much of the Great
Strangulation was due to special interests such as coal companies? I
don’t know, but it looks like the coal industry would have died by 2000
(according to Peter Lang) if the pre-1970 trends in nuclear power had
continued.
Green religious ideas explain hostility to energy-intensive
technologies, but I have doubts about whether that would be translated
into effective action. Greens could have caused cultural changes that
shifted the best and the brightest away from wealth creation and toward
litigation.
That attempt to attribute the stagnation mainly to Greens seems a bit
weaker than the special interests explanation. But I remain very
uncertain about whether there’s a single cause, or whether it took
several independent errors to cause the stagnation.
What now? I don’t see how we could just turn on a belief in a big god.
The book says we’ll likely prosper in spite of the problems discussed
here, but leaves me a bit gloomy about achieving our full potential.
The book could use a better way of labeling environmentalists who aren’t
Green fundamentalists. Josh clearly understands that there are big
differences between Green fundamentalists and people with pragmatic
motives for reducing pollution or preserving parks. Even when people
adopt Green values mostly for signaling purposes, there are important
differences between safe rituals, such as recycling, and signals that
protect the coal industry.
Yet standard political terminology makes it sound like attacks on the
Greens signal hostility to all of those groups. I wish Josh took more
care to signal a narrower focus of hostility.
Ironically for a book that complains about virtue signaling, a fair
amount of the book looks like virtue signaling. Maybe that gave him a
license to ignore mundane things like publicizing the book (I couldn’t
find a mention of the book on his flying car
blog until 3 months after it was
published).
Has the act of writing this review licensed me to forget about being
effective? I’m a bit worried.
Miscellaneous comments and complaints
It isn’t perhaps realized just how much the war on cars contributed to
the great stagnation—or how much flying cars could have helped
prolong the boom.
Josh provides a good analysis of the benefits of near-universal car
ownership, and why something similar should apply to flying cars. But he
misses what I’ll guess was the biggest benefit of cars—people applied
for jobs for which they couldn’t have previously managed to get to an
interview. Company towns were significant in the 19th century—with
downsides that bore some similarity to slavery, due to large obstacles
to finding a job in another town. Better transportation and
communications changed
that.
He says “a century of climate change in the worst case might cost us
as much as liability lawyers do now.” He gets his estimate of the worst
case from this GAO
report.
That’s misleading about how we should evaluate the actual worst case.
I’m not too clear how they got those numbers, but they likely mean
something more like that there’s a 95% chance that according to some
model, climate change will do no more damage than lawyers. That still
leaves plenty of room for the worst 1% of possible outcomes to be much
worse than lawyers according to the model, and there’s enough
uncertainty in climate science that we should expect more than a 5%
chance of the model erring on the optimistic side. Note also that it’s
not hard to find a somewhat respectable source that says climate change
might cost over 20% of global
GDP.
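To make that distinction concrete, here is a toy illustration (arbitrary numbers, not the GAO's model): with a heavy-tailed damage distribution, the 95th percentile can sit far below the outcomes in the worst few percent.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy heavy-tailed distribution of climate damages (arbitrary units, illustrative only).
damages = rng.lognormal(mean=0.0, sigma=1.5, size=1_000_000)

p95 = np.percentile(damages, 95)
worst_1pct_mean = damages[damages > np.percentile(damages, 99)].mean()
print(f"95th percentile: {p95:.1f}")
print(f"mean of worst 1% of outcomes: {worst_1pct_mean:.1f}")  # several times the p95 value
```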
I see other problems with his climate change comments, but they seem
less important than his dismissal of the tail risks.
Josh reports that flying a plane causes him to think in far mode, much like our somewhat biased view of the future.
It’s been a long time since I’ve flown a plane, but I don’t recall that
effect being significant. I find that a better way to achieve that
experience is to hike up a mountain whose summit is above the clouds.
Although there are relatively few places that have an appropriate
mountain nearby, and it takes somewhat special timing where I live to do
that.
While researching this review, I found this weird litigation story: Disney Sued for Not Building Flying “Star Wars” Car.
I often tend to side with technological determinist views of history,
but this book provides some evidence against that. Just compare Uber
with “Uber for planes”—it looks like there’s a good deal of luck
involved in what progress gets allowed.
Josh illustrates the Machiavelli Effect by an example of expert advice
that fat is unhealthy, and he complains that the experts ignore Gary
Taubes’ carbophobic counter-movement. Yet what I see is people on both
sides of that debate focusing on interventions that are mostly
irrelevant.
Josh points out that we can test the advice, and reports that he lost a
good deal of weight after switching to a high-fat diet. Well, I tried a
similar switch in 2012 from a low-fat diet to a high-fat diet, and it
had no effect on my weight (and a terrible effect on my homocysteine and
sdLDL, due to high saturated fat). The dietary changes that had the best
effects on my weight were alternate day calorie restriction, cutting out
junk food (mainly via paleo heuristics), and eating less kelp (which was
depressing my thyroid via excess iodine).
He cites Scott Alexander in other contexts, but apparently missed this
post
pointing out serious flaws in Taubes’ claims. Note also that Taubes
reacted
poorly
to evidence against his theory.
Miscellaneous questions prompted by the book
The book hints that cultural beliefs have important influences on where
smart people apply their talents. This mostly seems hard to analyze.
Would Elon Musk be swayed by ergophobia or Green fundamentalism? That
seems like the main example I can generate about a competent tech leader
whose plans seem somewhat influenced by popular beliefs about where
technology should head. Tesla and SolarCity arguably fit a pattern of
Musk being influenced by Green visions. But SpaceX looks more like
pandering to the visions of ergophiles.
The book left me wondering: where does high
modernism fit into this
story? I see many similarities between high modernism and this book’s
notion of who the bad guys are. Yet high modernism started to crumble a
bit before the worst parts of the Great Strangulation started (i.e.
around 1970). The book hints at a semi-satisfying answer: Christianity
and high modernism produced a decent balance of power where each
ideology checked the others’ excesses, but Green fundamentalism eroded
the good aspects of high modernism while strengthening the worst
aspects.
Did oil prices rise in the 1970s due to evidence that nuclear prices
were rising? I can almost imagine OPEC being prescient enough to see
that nuclear regulation saved them from important competition. The
timing of OPEC’s initial effects on the market seems to closely coincide
with the nuclear industry developing cost disease. But I don’t quite
expect that OPEC leaders were that smart.
Another odd hypothesis: increasing mobility enabled people to move too
easily to better jurisdictions. This scared lots of special interests
(e.g. local governments, companies with a local monopoly, etc., whose
power depended on captive customers), who reacted by advocating policies
which reduced mobility (e.g. stifling transportation, encouraging home
ownership instead of renting).
Quotes
I’ve only tried to summarize and analyze the more modest and basic parts
of the book here. Some parts of the book are too strange for me to want
to review. I will close with some quotes from them:
Hmmm. This might explain some of the book’s peculiarities: “ideation
recapitulates inebriation!”.
“The human of the future will have more and better senses, be stronger
and be adaptable to a much wider range of environments, and last but not
least have the biosphere atom-rearranging capability built in. The human
of the future need not have any ecological footprint at all.”
His favorite form of renewable energy is nuclear: “In other words, if we
start taking uranium out of seawater and use it for the entire world’s
energy economy, indeed a robustly growing energy economy, the
concentration in seawater will not decline for literally millions of
years.”
“In the Second Atomic Age, Litvenenko would have gotten a text from his
left kidney telling him that it had collected 26.5 micrograms of
Polonium-210, and what would he like to do with it?”
He asks us not to call this a greenhouse: “The LEDs emit only the
frequencies used by chlorophyll, so they are an apparently whimsical
purple. The air is moist, warm, and has a significantly higher fraction
of CO2 than natural air … the plants do not need pesticides because
insects simply can’t get to them. … you get something like 300 times
as much lettuce per square foot of ground than the pre-industrial
mule-and-plow dirt farmer. All you need is power, to have fresh local
strawberries in January in the Yukon or in August in Antarctica.”
And he likes tall buildings. I don’t want to classify this comment:
A ten-mile tower might have a footprint of a square mile and could
house 40 million people. Eight such buildings would house the entire
current population of the United States, leaving 2,954,833 square
miles of land available for organic lavender farms.
Compared to the skyhook (geostationary orbital tower), which is just
barely possible even with the theoretical best material properties, a
tower 100 km high is easy. Flawless diamond, with a compressive
strength of 50 GPa, does not even need a taper at all for a 100 km
tower; a 100-km column of diamond weights 3.5 billion newtons per
square meter but can support 50 billion. Even commercially available
polycrystalline synthetic diamond with advertised strengths of 5 GPa
would work.
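The diamond-column arithmetic in that quote is easy to check; here is a quick verification using my own values for diamond's density and g:

```python
# Pressure at the base of a 100 km self-supporting diamond column.
density = 3510        # kg/m^3, diamond
g = 9.8               # m/s^2
height = 100_000      # m

base_pressure = density * g * height     # Pa = N/m^2
print(f"{base_pressure / 1e9:.1f} GPa")  # ~3.4 GPa, versus ~50 GPa compressive strength
```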
A Weather Machine could probably double global GDP simply by regional
climate control. … You could make land in lots of places, such as
Northern Canada and Russia, as valuable as California.
Um, don’t forget the military implications which might offset that.
I used to be sort of comfortable with Reynolds
numbers and lift-to-drag
ratios, but this claim seems to be beyond my pay grade:
Given the ridiculous wingspan and the virtually infinite Reynolds
number, we might get a lift-to-drag ratio of 100; we would need 1
billion pounds of thrust.
He’s interested in cold fusion, but admits it’s hard:
But we would like a theory in which whenever some mechanism causes
miracle 1 to happen, it almost always causes miracles 2 and 3. …
It seems at first blush that saying there might be a quantum coupling
between phonons and some nuclear degree of freedom is
indistinguishable from magic. But if you look closely, it’s not
completely insane.
I’ll take his word on that for now, since “look closely” appears to
require way more physics than I’m up for.
Biotech gets approximately one paragraph, including: “Expect
Astro the talking dog
before 2062. Expect to live long enough to see him.”
One of the hardest jobs that humans do, some well and some poorly, is
management of other humans. One of the major reasons this is hard is
that humans are selfish, unreasonable, fractious, and just plain
ornery. … On the other hand, managing robots with human-level
competence will be falling-down easy. In the next couple of decades,
robots will be climbing up the levels of competence to compete with
humans at one job and another. Until they become spectacularly better,
though, I suspect that the major effect will be to make management
easier—perhaps so easy a robot could do it! Once we build
trustworthy IQ 200 machines, only an idiot will trust any human to
make any decision that mattered …
What then are we humans supposed to do?
Don’t look at me! We already know that only a fool would ask a human
such an important question. Ask the machines.
when someone invents a method of turning a Nicaragua into a Norway,
extracting only a 1% profit from the improvement, they will become
rich beyond the dreams of avarice and the world will become a much
better, happier, place. Wise incorruptible robots may have something
to do with it.
Footnotes
[1] - I haven’t read The Great Stagnation, so I’m commenting based
on simple summaries of it. Based on what I know of Cowen, the books are
of superficially similar quality. Cowen does an unusual amount of broad
but shallow research, whereas Josh is less predictable about his
research quality, but his research is often much deeper than Cowen’s.
E.g. for this book, it included learning how to fly, and buying a plane.
That research alone likely cost him more money than he’ll make from the
book (not to mention hundreds of hours of his time), and it’s not the
only way in which his research is surprisingly deep.
[2] - not in quite the same sense as what people who call
themselves
Green fundamentalists mean, but pretty close. Both sides seem to agree
that a key issue is whether industrial growth is good or bad.
Some of what Josh dislikes about the worst Greens, as captured in this quote from Alexander King:
My own doubts came when DDT was introduced. In Guyana, within two
years, it had almost eliminated malaria. So my chief quarrel with DDT,
in hindsight, is that it has greatly added to the population problem.
[3] - at least in many countries. South Korea’s nuclear costs have
continued to decline. The variation in when cost disease hits suggests
something other than engineering problems.
I got concerned about the lack of data from China. I couldn’t find
comparable Chinese data, so I used financial data from CGN Power Company
(2011 data
here,
first half 2018 data
here)
to show, if my math is right, that CGN sold power at RMB0.3695
($0.0558) / KWh in 2011 versus RMB0.2966 ($0.0448) / KWh in 2018, a
decline of nearly 20%. I.e. no cost disease there.
Note: I own stock in CGN Power.
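The arithmetic behind that claim, for transparency (the RMB figures come from the linked filings; treat the rest as my own calculation):

```python
# CGN average realized electricity price, RMB per kWh, from the linked filings.
price_2011 = 0.3695
price_2018 = 0.2966

decline = (price_2011 - price_2018) / price_2011
print(f"decline: {decline:.1%}")   # about 19.7%, i.e. "nearly 20%"
```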
Josh claims that the Navy’s nuclear power program avoided strangulation.
Where can I get data about the cost trends there?
[4] - I’ve looked for anti-nuke arguments about the cost of
nuclear power, and most seem to assume that cost disease is inevitable.
A few look for signs that nuclear power has been treated unfairly, and
focus on things like subsidies or carbon taxes.
It seems quite plausible that they start with the assumption that most
wealth is a gift from Mother Nature, and conclude that most important
conflicts are zero-sum struggles over who gets those gifts. They don’t
see anything that looks like taking resources away from nuclear power,
and conclude that nuclear power has been regulated fairly.
Let me suggest an analogy: imagine the early days of the dot-com boom,
when the benefits of Google search were not widely understood. Imagine
also a coalition of music distributors, and people who are devoted to
community-building via promoting social interaction in local libraries.
Such a coalition might see Google as a threat, and point to the risks
that Google would make porn more abundant. Such a coalition might well
promote laws requiring Google to check each search result for porn (e.g.
via manual inspection, or by only indexing pages of companies who take
responsibility for keeping porn off their sites). It would be obvious
that Google needs to charge a moderately high subscription fee for its
search—surely the new rules would only increase the subscription fees
by a small fraction. [It actually seemed obvious to most hypertext
enthusiasts up through about 1995 that a company like Google would need
to charge users for its service.] Oh, and
Xanadu has some
interesting ideas for how to use micropayments to more easily charge for
that kind of service—maybe Google can run under Xanadu?
A person who had no personal experience of benefiting from Google might
not notice much harm from such a regulation, or might assume it has a
negligible effect on Google’s costs. And someone who imagines that
Mother Nature is the primary source of free lunches is likely to
seriously underestimate the benefits of Google.
I’ve seen occasional hints that people attribute the cost increases to
valuable safety measures that had been missing from early reactors, but
I haven’t found anyone saying that who seems aware of the
risks of keeping other
energy sources in business. So I’m inclined to treat that the way I
treat concerns about the safety of consumers pumping gas, or the dangers of caffeinated driving.
[5] - note that the high inflation of the time complicates that
picture. A more simplified model would go like this: imagine 0%
inflation, and the company borrows money at an interest rate of 5%. Then
a 5-year delay causes the cost of capital to rise 27.6% (1.05^5). Cost
of capital is one of the larger
costs
of nuclear power, so the delays alone look sufficient to turn nuclear
power from quite competitive to fairly uncompetitive.
I expect that people who are unfamiliar with finance will underestimate
the significance of this.
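A minimal sketch of that toy model (0% inflation and a 5% borrowing rate, both assumed in the footnote), extended to show how the penalty grows with the length of the delay:

```python
# Extra cost of capital caused by construction delay, under the footnote's toy assumptions.
interest_rate = 0.05   # annual borrowing rate, with 0% inflation assumed

for delay_years in (3, 5, 10):
    extra = (1 + interest_rate) ** delay_years - 1
    print(f"{delay_years}-year delay -> capital cost up {extra:.1%}")
# The 5-year case gives 27.6%, matching the footnote's figure.
```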
[6] - Josh says this is an example of how science works pretty
well: social scientists are likely quite biased against this conclusion,
but keep upholding it.