[SEQ RERUN] Raised in Technophilia
Today’s post, Raised in Technophilia, was originally published on 17 September 2008. A summary (taken from the LW wiki):
When Eliezer was quite young, it took him a very long time to get to the point where he was capable of considering that the dangers of technology might outweigh the benefits.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we’ll be going through Eliezer Yudkowsky’s old posts in order so that people who are interested can (re-)read and discuss them. The previous post was My Best and Worst Mistake, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day’s sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
This should probably read “might outweigh the benefits”.
How about existing examples of specific technologies where the risks outweigh the benefits? And perhaps a statement or two as to why this is.
Let me start with the obvious one: nuclear weapons. Nuclear weapons can, in theory, be released in a matter of minutes, and at one time there were enough of them ready to fire to wipe out all the urban areas of Western civilization. There was also no real defense.
With this said, it is unclear to me what the real probabilities of this risk were. If an accidental release had occurred, and a whole city in the Eastern or Western bloc had been lost, would this likely have led to an escalating series of exchanges resulting in every available weapon being fired? Also, was there ever a point at which policymakers with legitimate power considered an all-out attack to destroy the other side, in essence hoping to exterminate all military forces and infrastructure while “only” losing some cities?
And, despite this apparent existential risk, relatively few people were killed in global wars between nuclear-armed nations after their development.
The reason nuclear weapons are such a risk is that human civilization is concentrated in nice neat target clusters, limits on control systems make interception of incoming reentry vehicles difficult and unreliable, and, due to technological limitations, anti-ICBM defenses are much more complex and resource-intensive than the missiles themselves.
Molecular nanotechnology, if used to build “killer nanobots” that combust organic molecules with oxygen and so essentially incinerate all life in the biosphere a few molecules at a time, presumably presents a similar problem. Destructive, weaponized bots would not need to be self-replicating, and could be orders of magnitude simpler than a device that could somehow hunt them down and disable them.
And of course, malicious AI. Pretty much, unless the ecological niches that a malicious AI would occupy (i.e., all the major computing systems and automated factories) are already occupied by friendly AIs, we’re all screwed.
What ratio of mass consumed to bot mass per unit time are you supposing? If a bot can consume its own mass ten times per second, and the lab produces a kiloton of bots before releasing them, it still takes nearly two years to consume all the biomass on Earth, assuming no replenishment. (560 GT of biomass, counting only carbon.) Presumably we’re all in trouble well before that, but it still seems slow compared to what I read into your comments. I also think the bots would likely be slower and less numerous than that. (Heat concerns, rate of movement, and the complexity requirements of self-repair would likely make these bots slow, but I don’t really know how slow.)
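For reference, here is the back-of-the-envelope version of that figure. The consumption rate and initial bot mass are just the hypothetical values above, and the biomass number counts carbon only; nothing here is a measured quantity.

```python
# Rough check of the "nearly two years" figure, using only the
# assumptions stated in the comment above (all values hypothetical).

bot_mass_kg = 1e6        # one kiloton of bots released
consumption_rate = 10    # each bot consumes 10x its own mass per second
biomass_kg = 560e12      # ~560 GT of biomass, counting only carbon

mass_eaten_per_second = bot_mass_kg * consumption_rate  # 1e7 kg/s
seconds_needed = biomass_kg / mass_eaten_per_second     # ~5.6e7 s
years_needed = seconds_needed / (3600 * 24 * 365)

print(f"{years_needed:.2f} years")  # ~1.78 years, i.e. "nearly two"
```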
The bots are not able to self-repair. Each one is constructed of diamondoid, and is in fact very similar to the bots described in this paper (just simpler): http://www.foresight.org/Nanomedicine/Respirocytes.html
Except, instead of carrying oxygen for the blood, they react their onboard oxygen with whatever organic molecules they diffuse into. Because I expect synthetic catalysts to be much more efficient than nature’s (because they are rationally optimized and constructed of materials not found in nature, such as diamondoid and/or rare-earth components), I expect that they will consume biomass at a rate limited only by diffusion. They do not need to consume all the biomass on the planet: the heat generated by their activity will set forests on fire, cause land animals to “spontaneously” combust, set fire to buildings, vehicles, and so on.
Some of the bots may be destroyed by the fires they themselves cause; I cannot model that.
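To put very rough numbers on the heat argument: a minimal sketch, assuming dry biomass at roughly 17 MJ/kg, a specific heat around 1.7 kJ/(kg·K), ignition near 300 °C, and all released heat retained locally. These are ballpark textbook values I am assuming, not figures from the scenario itself, and real losses to the surroundings would raise the threshold.

```python
# Rough estimate: what fraction of a patch of dry biomass must the bots
# oxidize before the released heat (if retained) ignites the rest?
# All constants below are assumed ballpark values.

heat_of_combustion = 17e6   # J/kg, typical for dry wood/biomass
specific_heat = 1.7e3       # J/(kg*K), dry wood
temperature_rise = 280      # K, from ~20 C to ~300 C autoignition

fraction_needed = (specific_heat * temperature_rise) / heat_of_combustion
print(f"{fraction_needed:.1%}")  # ~2.8%: oxidizing a few percent suffices
```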
I am assuming that the bots are produced by productive nanomachinery that IS able to replicate itself (that is, the nanofactories that produce these bots can also produce the parts for more nanofactories). So the attacker would have acres of warehouses stuffed with machinery mass-producing these bots, and would release kilotons of them into the atmosphere. They would have internal clocks or respond to external signals, so that they could first be sufficiently distributed around the planet before activation.
How to counter? The most straightforward way is probably that these devices would have weak points. You might be able to produce a synthetic molecule that preferentially binds to the catalytic portion of these bots and jams them. I’m not really certain that would do much good, however. It might let you save a human patient, but not the biosphere.
The way to survive in the short term is in a bunker. These bots can’t eat any material that cannot be combusted, and a broader rule is that no nanobot of any design can operate for long on material that can’t be reacted to release free energy. So ordinary concrete, thick walls of corrosion-resistant metals, and the like would provide a good defense against most or all nanobots.
On the bright side of things, the same technology used to make the bots could in principle allow survivors to live in self-contained bunkers, since a relatively small nanofactory could replace the many square miles of infrastructure we need in order to produce essential technologies today.
Another standard (at least in sci-fi) example: some gene in GMO foods unintentionally resulting in sterilization as a long-term side effect.
The problem with your example is that sterilization is a side effect that would take a very long time to actually cause armageddon, and if a civilization can produce GMO foods it can PROBABLY reverse whatever causes the sterility.
A nuclear exchange that wipes out the key parts of Western civilization could happen in one hour, and the war could be over in one day. If someone were to release a huge swarm of killer nano-machines, it might take months to years for them to eat the biosphere (again, the machinery I am talking about is NOT self-replicating), but developing a countermeasure could take decades.
Another factor to keep in mind is that human biotech advances are very slow and incremental, due to extremely heavy regulation. There is a reluctance to take risks, yet if those risks were taken, far more rapid advances would be likely. If a significant proportion of the population were sterile, this reluctance to take risks would be enormously reduced, and rapid advances could happen.
One apocryphal story I heard from a professor at Texas Tech Medical School was that most chemotherapy agents in use today were developed during the postwar period, when regulation was almost nonexistent and a researcher could go straight from the laboratory to the bedside of a cancer patient.
Hearing stories like this, and realizing that in the United States, something like 1.6 million people die every year ANYWAY, I think that regulations should be greatly reduced, but that’s for another discussion.