Likewise, to downgrade bioweapons as an existential threat, you have to argue that no individual or lab will ever, accidentally or on purpose, release something especially contagious or virulent.
The problem here is not that destruction is easier than benevolence; everyone agrees on that. The problem is that the SIAI is not arguing about grey goo scenarios but about something that is not only very difficult to produce but also needs the incentive to do so. The SIAI is not arguing about the possibility of a dam bursting, but about a dam failure that is, on top of that, deliberately caused by the dam itself. So why isn't, for example, nanotechnology a more likely and therefore bigger existential risk than AGI?
Even ems—human emulations—have this same problem, and they might actually be worse in some ways, as humans are known for doing worse things to each other than mere killing.
As I said in other comments, that is an argument one should take seriously. But there are also arguments that outweigh this path, and all others, to some extent. It may very well be the case that once we are at the point of human emulation we have either already merged with our machines, or we are faster and better than our machines and simulations alone. It may also very well be that the first emulations, as is the case today, run at much slower speeds than the original, and that before any emulation reaches a standard-human level we are already a step ahead ourselves, or in our understanding and security measures.
unFriendly AGI is no less an existential risk than bioweapons are.
Antimatter weapons are less of an existential risk than nuclear weapons, although it is really hard to destroy the world with nukes and really easy to do so with antimatter weapons. The difference is that antimatter weapons are harder to produce, acquire and use than nuclear weapons by about as much as they are more efficient tools of destruction.
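(A rough sense of scale for the "more efficient tools of destruction" half of that claim, as a back-of-the-envelope comparison assuming complete annihilation of 1 kg of antimatter with 1 kg of ordinary matter and complete fission of 1 kg of U-235:

$$E_{\text{annihilation}} = 2mc^{2} \approx 2\,\text{kg} \times (3\times10^{8}\,\text{m/s})^{2} \approx 1.8\times10^{17}\,\text{J} \approx 43\ \text{Mt of TNT}$$

$$E_{\text{fission, 1 kg U-235}} \approx 8\times10^{13}\,\text{J} \approx 20\ \text{kt of TNT}$$

So, per kilogram, antimatter releases on the order of a thousand times the energy of fissioned uranium, while being incomparably harder to produce and store.)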
So why isn't, for example, nanotechnology a more likely and therefore bigger existential risk than AGI?
If you define “nanotechnology” to include all forms of bioengineering, then it probably is.
The difference, from an awareness point of view, is that the people doing bioengineering (or creating antimatter weapons) have a much better idea that what they're doing is potentially dangerous/world-ending than AI developers are likely to have. The fact that many AI advocates put forth pure fantasy reasons why superintelligence will be nice and friendly by itself (see mwaser's ethics claims, for example) is evidence that they are not taking the threat seriously.
Antimatter weapons are less of an existential risk than nuclear weapons, although it is really hard to destroy the world with nukes and really easy to do so with antimatter weapons. The difference is that antimatter weapons are harder to produce, acquire and use than nuclear weapons by about as much as they are more efficient tools of destruction.
Presumably, if you are researching antimatter weapons, you have at least some idea that what you are doing is really, really dangerous.
The issue is that AGI development is a bit like trying to build a nuclear power plant, without having any idea where “critical mass” is, in a world whose critical mass is discontinuous (i.e., you may not have any advance warning signs that you are approaching it, like overheating in a reactor), using nuclear engineers who insist that the very idea of critical mass is just a silly science fiction story.
What led you to believe that the space of possible outcomes where an AI consumes all resources (including humans) is larger than the space of outcomes where it doesn't? For some reason you seem to assume that the unbounded incentive to foom and consume the universe comes naturally to any constructed intelligence, while any other incentive is very difficult to implement. What I see is a much larger number of outcomes where an intelligence does nothing without some hardcoded or evolved incentive. Crude machines do things because that's all they can do; the number of different ways for them to behave is very limited. Intelligent machines, however, have high degrees of freedom in how they behave (pathways to follow), and with this freedom comes choice, and choice needs volition: it needs an incentive, the urge to follow one way and not another. You seem to assume that the will to foom and consume is somehow given, that it does not have to be carefully and deliberately hardcoded or evolved, yet that the will to constrain itself to given parameters is really hard to achieve. I just don't think that this premise is reasonable, and it is the premise you base all your arguments on.
Have you read The Basic AI Drives?
I suspect the difference in opinions here is based on different answers to the question of whether the AI should be assumed to be a recursive self-improver.
So why isn't, for example, nanotechnology a more likely and therefore bigger existential risk than AGI?
That is a good question and I have no idea. The degree of existential threat there is most significantly determined by relative ease of creation. I don't know enough to be able to predict which would be produced first—self-replicating nanotechnology or an AGI. SIAI believes the former is likely to be produced first, and I do not know whether or not they have supported that claim.
Other factors contributing to the risk are:
Complexity—the number of ways the engineer could screw up while creating it in a way that would be catastrophic. The ‘grey goo’ risk is concentrated more specifically in the self-replication mechanism of the nanotech, while just about any mistake in an AI could kill us.
Awareness of the risks. It is not too difficult to understand the risks when creating a self-replicating nanobot. It is hard to imagine an engineer creating one without seeing the problem and being damn careful. Unfortunately, it is not hard to imagine Ben.
I find myself confused at the fact that Drexlerian nanotechnology of any sort is advocated as possible by people who think physics and chemistry work. Materials scientists—i.e. the chemists who actually work with nanotechnology in real life—have documented at length why his ideas would need to violate both.
This is the sort of claim that makes me ask advocates to document their Bayesian network. Do their priors include the expert opinions of materials scientists, who (pretty much universally as far as I can tell) consider Drexler and fans to be clueless?
(The RW article on nanotechnology is mostly written by a very annoyed materials scientist who works at nanoscale for a living. It talks about what real-life nanotechnology is and includes lots of references that advocates can go argue with. He was inspired to write it by arguing with cryonics advocates who would literally answer almost any objection to its feasibility with “But, nanobots!”)
That RationalWiki article is a farce. The central “argument” seems to be:
imagine a car production line with its hi-tech robotic arms that work fine at our macroscopic scale. To get a glimpse of what it would be like to operate a production line on the microscopic scale, imagine filling the factory completely with gravel and trying to watch the mechanical arms move through it—and then imagine if the gravel was also sticky.
So: they don't even know that Drexler-style nanofactories operate in a vacuum!
They also need to look up “Kinesin Transport Protein”.
Drexler-style nanofactories don’t operate in a vacuum, because they don’t exist and no-one has any idea whatsoever how to make such a thing exist, at all. They are presently a purely hypothetical concept with no actual scientific or technological grounding.
The gravel analogy is not so much an argument as a very simple example for the beginner that a nanotechnology fantasist might be able to get their head around; the implicit actual argument would be “please, learn some chemistry and physics so you have some idea what you’re talking about.” That is not an argument that people will tend to accept (in general, people don't take any sort of advice on any topic, ever). But when experts tell you you're verging on not even wrong, and there remains absolutely nothing to show for the concept after 25 years, it might be worth allowing for the possibility that Drexlerian nanotechnology, even if the requisite hypothetical technology and hypothetical scientific breakthroughs happen, is ridiculously far ahead of anything we have the slightest understanding of.
“The proposal for Drexler-style nanofactories has them operating in a vacuum”, then.
If these wannabe-critics don’t understand that then they have a very superficial understanding of Drexler’s proposals—but are sufficiently unaware of that to parade their ignorance in public.
The “wannabe-critics” are actual chemists and physicists who actually work at nanoscale—Drexler advocates tend to fit neither qualification—and who have written long lists of reasons why this stuff can’t possibly work and why Drexler is to engineering what Ayn Rand is to philosophy.
I’m sure they’ll change their tune when there’s the slightest visible progress on any of Drexler’s proposals; the existence proof would be pretty convincing.
Hah! A lot of the edits on that article seem to have been made by you!
Yep. Mostly written by Armondikov, who is said annoyed materials scientist. I am not, but I spent some effort asking other materials scientists who work or have worked at nanoscale for their expert opinions.
Thankfully, the article on the wiki has references, as I noted in my original comment.
So what were the priors that went into your considered opinion?