My relatively uninformed impression was that the particularly unique nanotech risk was poor programming leading to grey goo.
Is there a reason that economic disruption or increased weapon capacity are greater x-risks? I thought x-risks were focused on under-appreciated but extreme downside risks. The examples from the article have greater expected harm because they are higher probability, but x-risks are civilization or humanity destroyers, aren’t they? Does economic disruption really have that large a downside?
My relatively uninformed impression was that the particularly unique nanotech risk was poor programming leading to grey goo.
The problem is that the grey goo has to out-compete the biosphere, which is hard if you’re designing nanites from scratch. If you’re basing them on existing lifeforms, that’s synthetic biology.
Yes, it’s very similar to the problem of designing a macroscopic robot that can out-compete natural predators of the same size. Early attempts will probably fail completely, and then we’ll have a few generations of devices that are only superior in some narrow specialty or in controlled environments.
But just as with robots, the design space of nanotech devices is vastly larger than that of biological life. We can easily imagine an industrial ecology of von Neumann machines that spreads itself across a planet, exterminating all large animal life using technologies that such organisms can’t begin to compete with (mass production, nuclear power, steel armor, guns). Similarly, there’s a point of maturity at which nanotech systems built with technologies microorganisms can’t emulate (centralized computation, digital communication, high-density macroscopic energy sources) become capable of displacing any population of natural life.
So I’d agree that it isn’t going to happen by accident in the early stages of nanotech development. But at some point it becomes feasible for governments to design such a weapon, and after that the effort required goes down steadily over time.
One difference is that the reproduction rate, and hence rate of evolution, of micro-organisms is much faster.
Does economic disruption really have that large a downside?
Not on its own, but massive disruption at a time when unprecedented manufacturing capacity exists could lead to devastating long-term wars. If nanotechnology, once developed, were cheap and easy, these wars might go on perpetually.
Wouldn’t poor programming make grey goo a type of unfriendly A.I.? If so, then that would justify leaving it out of this description, as nanotechnology would just facilitate the issue. The computer commanding the nanobots would be the core problem.
Some of this is terminology, used with intent to narrow the topic—when Eliezer talks about FAI and uFAI, he’s mostly talking about potential Artificial General Intelligence. A nano-machine that makes additional copies of itself as part of its programming is not necessarily a General Intelligence. Most of the predicted uses of nano-machines wouldn’t require (or be designed to have) general intelligence.
I’m very aware that giving a terminological answer conceals that there is no agreement on what is or isn’t “General Intelligence.” About all we can agree on is that human intelligence is the archetype.
To put it slightly differently, one could argue that the laptop I’m using right now is a kind of Intelligence. And it’s clearly Artificial. But conversations about Friendly and unFriendly aren’t really about my laptop.
Fair enough, but the grey goo issue is still probably based enough in programming to categorize it separately from the direct implications of nanotechnological production.
Eh, I guess. I’m not a big fan of worrying about the consequences of something that both (a) works exactly as intended and (b) makes us richer.
So I think it is conflating problems to worry about weapon production and the general man’s-inhumanity-to-man problem when the topic is nanotechnology.
More importantly, the exact issue I’m worried about (poor programming of something powerful and barely under human control that has nothing to do with figuring out human morality) seems like it is going to be skipped.