Reducing Catastrophic Risks, A Practical Introduction
While thinking about my own next career steps, I’ve been writing down some of my thoughts about what makes for an impactful career.
In the process, I wrote an introductory report on what seem to me to be practical approaches to problems in catastrophic risks. It’s intended to complement the analysis that 80,000 Hours provides by thinking about what general roles we ought to perform, rather than analysing specific careers and jobs, and by focusing specifically on existential risks.
I’m happy to receive feedback on it, positive and negative.
Here it is: Reducing Catastrophic Risks, A Practical Introduction.
Thanks for this. I’m not sure why it hasn’t gotten more upvotes.
On technological safety, Freitas & Merkle’s Kinematic Self-Replicating Machines includes material on safety measures for self-replicating molecular-nanotechnological machines. Freitas also remains a Senior Research Fellow at the Institute for Molecular Manufacturing, which runs the Assembler Safeguards Project for, surprise surprise, assembler safeguards.
Also, the emphasized words from the report seem superfluous:
Thank you for mentioning Unknown Unknowns explicitly. I suspect they are among the least discussed risks, simply because of the difficulties involved. However, I do think they may be more tractable than they appear at first. For example, there is a substantial existential risk posed by global catastrophic risks from which civilization never fully recovers. (This GiveWell article sums things up nicely: http://blog.givewell.org/2015/08/13/the-long-term-significance-of-reducing-global-catastrophic-risks/ ) I think trying to increase the probability that we recover from a broad range of GCRs could significantly reduce total existential risk.
If we expect GCRs to be much more likely than X-risk events, and if there is a reasonable chance that our civilization will never recover from any given GCR, thus turning it into an X-risk, then it seems prudent to put a lot of effort into increasing the probability that we recover from various forms of collapse. Existing efforts include things like seed banks, fallout shelters, and efforts against a digital dark age. However, those projects seem to be done for their own sake, rather than to reduce X-risk. I think it is likely that they would do things slightly differently if X-risk were the terminal goal. This implies that the counterfactual good we might do by joining may well be large.
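To make the reasoning above concrete, here is a minimal back-of-envelope sketch. Every probability in it is a hypothetical placeholder, not an estimate from the post or the linked article; it only shows the shape of the argument.

```python
# Hedged back-of-envelope: how much total X-risk could come from the
# "GCR followed by a failure to recover" pathway. All numbers are illustrative.

p_gcr = 0.10          # hypothetical: chance of at least one GCR this century
p_no_recovery = 0.05  # hypothetical: chance civilization never fully recovers, given a GCR

x_risk_via_gcr = p_gcr * p_no_recovery
print(f"X-risk contributed by non-recovery from a GCR: {x_risk_via_gcr:.3f}")    # 0.005

# Halving the non-recovery probability halves that whole term, which is why
# recovery-focused work can matter even if the GCRs themselves are hard to prevent.
halved = p_gcr * (p_no_recovery / 2)
print(f"Same figure if recovery efforts halve non-recovery risk: {halved:.4f}")  # 0.0025
```

Under these made-up numbers, anything that makes recovery more likely buys down X-risk in direct proportion, which is the sense in which recovery work could be competitive with prevention work.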
Note that this also implies that work in the humanities might be especially useful, if it explores how and why civilizational progress occurs. Understanding the mechanisms might allow us to better predict what sorts of disruptions cause collapses, and what factors are most important in a recovery. Perhaps history, sociology, anthropology, etc. would be good career paths for people interested in reducing X-risk (at least if they think they can help promote relevant academic research within their field). We tend to discuss the more quantitative fields here, perhaps overly so.
Yep. I guess it’d be good to include books about rebuilding civilisation. Maybe also 3D printers, or other design templates for basic agricultural equipment à la Open Source Ecology.
The article at your link defines a GCR as
I can’t imagine how the death of, say, 10% of the global population is something from which civilization never (!) fully recovers.
Yeah, that claim definitely requires a bit of an explanation. Let me try and show you where I’m coming from with this.
Our current global economy is structured around growth, with many large feedback mechanisms in place to promote growth. Not all economies work that way.
The Romans, for example, were excellent at recognizing and adopting technology and ideas from other civilizations. However, just about the only actual invention they made themselves was concrete. After Rome fell, there was a dark-age period in which much of the knowledge the Romans had gathered was lost. Then there was a period of extremely slow growth, as a few new agricultural practices were implemented and some of the lost classical knowledge was rediscovered via the Crusades. But if you were a landed baron, you would expect the same yield in rents and crops from your land each year, not a constant rise in productivity. Money changed hands, but there wasn’t much in the way of economic growth.
Oddly enough, the thing that really kicked things off was the Black Death. It killed a third of Europe, and suddenly there was a massive labor shortage. Land was the primary unit of wealth, and there weren’t enough people to work the land. This effectively made labor much more valuable, and made labor-saving devices desirable. Before that, labor was so cheap that it just didn’t make sense to use anything else. This led to a ton of inventions like the Gutenberg printing press, and to the spread of things like windmills, which had been invented long before but never saw much use while cheap labor was available. This oriented large chunks of society around science and technology, and arguably led to the institutions that brought about the Enlightenment and the Industrial Revolution. Perhaps several other factors besides the Black Death helped things along, but the plague had a huge impact, especially on the ~100 years directly after it.
I could easily see a society focus its efforts purely on the near term, with no long-term investments. Often, research into things like mathematics doesn’t get put to practical use for hundreds of years. Is our current research into string theory going to help someone build a better car within our lifetimes, let alone within a business-cycle timeframe?
The mechanisms of science are large and expensive. They require many thousands of specialists with domain-specific knowledge to work full time on expanding human knowledge. Someone has to pay for all that, and I can easily picture a society structured in such a way that such funding isn’t available. Then maybe you’d get a few rich eccentrics funding their own curiosity, but I don’t know if that would be enough to kickstart a second scientific revolution.
This is especially true if we deplete our fossil fuel reserves. Our Industrial Revolution ran on coal and later oil, but I suspect it would be much more difficult to kickstart such a massive movement using all-electric tech and renewable energy. Good solar panels and wind turbines do eventually produce more energy than it takes to manufacture them, but only because they are mass-manufactured on an industrial scale. If I were to hack something together in a lab or garage, I would be surprised if I were even able to break even. This hints that the “activation energy” threshold for kickstarting a technology-oriented society may be much higher if we deplete our oil reserves.
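As a rough illustration of that break-even point, here is a minimal sketch. Every figure in it (embodied energy, yearly output, lifetime) is a made-up placeholder, not a real panel specification; the point is only how the margin shrinks when manufacturing is inefficient.

```python
# Illustrative energy-payback arithmetic for the point above.
# All figures are hypothetical placeholders.

ANNUAL_OUTPUT_KWH = 400   # assumed yearly output of one panel
LIFETIME_YEARS = 25       # assumed useful lifetime

def payback_years(embodied_kwh):
    """Years of operation before the panel has repaid the energy used to build it."""
    return embodied_kwh / ANNUAL_OUTPUT_KWH

def net_lifetime_kwh(embodied_kwh):
    """Total lifetime output minus the energy embodied in manufacturing."""
    return ANNUAL_OUTPUT_KWH * LIFETIME_YEARS - embodied_kwh

# Assumed embodied energy: industrial-scale production vs. a hand-built garage equivalent.
for label, embodied in [("factory", 1500), ("garage", 9000)]:
    print(label, round(payback_years(embodied), 1), "years to pay back,",
          net_lifetime_kwh(embodied), "kWh net over its lifetime")
# factory: ~3.8 years payback, +8500 kWh net
# garage: ~22.5 years payback, only +1000 kWh net -- barely worth building
```

Under these assumptions, the garage-built panel only just clears break-even before it wears out, which is the sense in which the post-collapse “activation energy” could be much higher without cheap fossil fuels.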
To make things worse, society may not WANT to recover. If we almost destroy ourselves with technology, I find it likely that many people will be against technology in general. I suspect nuclear weapons and the Cold War were a large reason so many hippies rejected technology entirely. And honestly, I can’t blame them. It’s better to be alive without tech than dead with it. But if such sentiment became widespread and entrenched in everyone’s consciousness, it would be unlikely that any government would ever fund the sciences in any way remotely like how we do today. Perhaps they’d have military tech research with very short timescales for innovation, but no far-reaching basic research that wouldn’t find practical applications for generations.
All that being said, I don’t think the default result of any economic collapse or civilizational decline is going back to the Stone Age. It’s probably well below a 50% probability, maybe even below 10%. But I highly doubt it is justifiable to put it below ~1%. It’s possible that a massive disaster might actually have the opposite effect, like the Black Death did. But if the global science budget is what gets cut when budgets get tight, then I would expect an eventual decline in the rate of technological progress.
Except technology continued to improve during the dark ages. For example, during the late Roman Empire waterwheels were rare toys that philosophers and mathematicians occasionally played with; by the end of the dark ages, there was one on almost every suitable site in Europe. And thanks to a lack of written records, we don’t even know how this happened.
On the other hand, the quantity of production did in fact greatly decline with the collapse of the Roman Empire.
You’re talking mostly about slowing growth or difficulties in rebuilding. But my question is different: given the death of ~10% of the population, why would civilization collapse at all?
Let’s make it a bit more concrete. Assume the Yellowstone supervolcano unexpectedly blows up. Much of North America is rendered temporarily uninhabitable, and there are a few years without summer, leading to crop failures around the world and consequent famines. Let’s say 10-20% of the people on Earth die within, say, three years.
Given this scenario, why would humanity devolve? No knowledge is lost. Most everyone is much poorer, but that’s nowhere near a “back to the Stone Age” scenario. We can still build machinery and computers, generate electricity, and so on.
Agreed, we could probably recover from a natural disaster, or even a war. On the other hand, improperly handling the current migrant crisis in Europe may very well ultimately be as disastrous as Emperor Valens’ decision to let the Visigothic refugees fleeing the Huns settle south of the Danube.
Ah, got it. I’d probably agree that if 0.7 billion people just randomly dropped dead, the long-term effects would likely be completely recoverable. It would be one more massive tragedy in the history of the human race, but not our downfall.
However, people fight awfully hard to survive, so in real life such massive death tolls often happen only after a long, brutal fight for survival. So GCRs may be accompanied by massive cultural, political, and economic changes. The Dust Bowl killed very few people, but displaced millions. If 10% of the world died of hunger, I would expect this to coincide with a larger, more severe form of the Great Depression. However, if the supervolcano or whatever just killed 10% instantly, without the dust in the atmosphere blotting out the sun and inducing years of winter, then I agree that it’s not an existential risk.
Note for posterity: apparently the Depression started before the Dust Bowl, although it seems to have been made significantly worse by it. I don’t think this negates my conclusion, although it made me update in your direction somewhat. I can track down a couple of other potential examples to examine, if you’re interested.
The conclusion that X-risk doesn’t necessarily scale with the number of deaths is potentially useful. It hints that there may be efficient ways to reduce X-risk, even in the face of massive death tolls. In cases where deaths are unavoidable, perhaps there are effective ways to contain the suffering to only one generation.