The correct way to approach a potential risk is not to come up with a couple of specific scenarios relating to the risk, evaluate those, and then pretend that you’ve done a proper analysis of the risk involved. That’s analogous to trying to make a system secure by patching security vulnerabilities as they show up while never employing safety measures such as firewalls, or trying to make a software system bug-free simply by fixing bugs as they get reported while ignoring techniques such as unit tests and defensive programming. It’s been tried and conclusively found to be a bad idea by both the security and software engineering communities. If you want to be safe, you need to take into account as many possibilities as you can, not just concentrate on the particular special cases that happen to come to your attention.
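To make the software analogy concrete, here is a toy sketch in Python (the function names, the table and the attack strings are all invented for illustration; this is not anyone’s real system). The first handler only blocks the specific attack strings that have already been reported, while the second rules out the whole class of malformed inputs up front:

```python
# Toy illustration: case-by-case patching versus a whole-class defense.
# KNOWN_BAD is a stand-in for "the specific scenarios we happened to think of".

KNOWN_BAD = {"'; DROP TABLE users;--", "<script>alert(1)</script>"}

def lookup_user_patchwork(user_id: str) -> str:
    # Reactive approach: rejects only the attack strings reported so far.
    if user_id in KNOWN_BAD:
        raise ValueError("rejected a known attack string")
    return f"SELECT * FROM users WHERE id = '{user_id}'"   # still injectable

def lookup_user_defensive(user_id: str) -> str:
    # Defensive approach: accept only the input class we can reason about.
    if not user_id.isdigit():
        raise ValueError("user_id must be a numeric string")
    return f"SELECT * FROM users WHERE id = {int(user_id)}"

if __name__ == "__main__":
    novel_attack = "1' OR '1'='1"                  # not on the known-bad list
    print(lookup_user_patchwork(novel_attack))     # slips straight through
    print(lookup_user_defensive("42"))             # fine
    # lookup_user_defensive(novel_attack) would raise, as intended
```

The point isn’t the SQL; it’s that the first style can only ever cover the cases somebody has already thought of, which is exactly the methodological mistake at issue.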
The proper unit of analysis here is not the particular techniques that an AI might use to take over. That’s pointless: for any particular technique that we discuss here, there might be countless others that the AI could employ, many of them ones nobody has even thought of yet. If we were in an alternate universe where Eric Drexler had been run over by a car before ever coming up with his vision of molecular nanotechnology, the whole concept of strong nanotech might be unknown to us. If we then only looked at the prospects for cyberwar, and concluded that an AI isn’t a big threat because humans can do cyberwarfare too, we could be committing a horrible mistake by completely ignoring nanotech. Of course, since in that scenario we couldn’t know about nanotech, our mistake wouldn’t be in ignoring it, but in choosing a methodology which is incapable of dealing with unknown unknowns even in principle.
So what is the right unit of analysis? It’s the power of intelligence. It’s the historical case of a new form of intelligence showing up on the planet and completely reshaping its environment to create its own tools. It’s the difference between the power of the chimpanzee species to change its environment towards its preferred state, and the power of the human species to change its environment towards its preferred state. You saying “well, here I’ve listed these methods that an AI could use to take over humanity, and I’ve analyzed them and concluded that the AI is no threat” is just as fallacious as it would be for a chimpanzee to say “well, here I’ve listed these methods that a human could use to take over chimpanzity, and I’ve analyzed them and concluded that humans are no threat to us”. You can’t imagine the ways that an AI could come up with and attempt to use against us, so don’t even try. Instead, look at the historical examples of what happens when you pit a civilization of inferior intelligences against a civilization of hugely greater ones. And that will tell you that a greater-than-human intelligence is the greatest existential risk there is, for it’s the only one where it’s by definition impossible for us to come up with ways to stop it once it gets out of control.
Of course, since in that scenario we couldn’t know about nanotech, our mistake wouldn’t be in ignoring it, but in choosing a methodology which is incapable of dealing with unknown unknowns even in principle.
You have to limit the scope of unknown unknowns. Otherwise, why not apply the same line of reasoning to risks associated with aliens? If someone says that there is no sign of aliens, you just respond that they might be hiding or using different methods of communication. That is the same as saying that if the AI can’t make use of nanotechnology, it might make use of something we haven’t even thought of. What, magic?
Yes, you could very well make an argument for the risks posed by superintelligent aliens. But then you would also have to produce an argument for a) why it’s plausible to assume that superintelligent aliens will show up anytime soon, and b) what we could do to prevent the invasion of superintelligent aliens if they did show up.
For AGI, we have an answer to point a (progress in computing power, neuroscience and brain reverse-engineering, etc.) and a preliminary answer to point b (figure out how to build benevolent AGIs). There are no corresponding answers to points a and b for aliens.
If someone says that there is no sign of aliens, you just respond that they might be hiding or using different methods of communication. That is the same as saying that if the AI can’t make use of nanotechnology, it might make use of something we haven’t even thought of.
No it’s not: think about this again. “Aliens of a superior intelligence might wipe us out by some means we don’t know” is symmetric to “an AGI with superior intelligence might wipe us out by some means we don’t know”. But “aliens of superior intelligence might appear out of nowhere” is not symmetric to “an AGI with superior intelligence might wipe us out by some means we don’t know”.
I didn’t mean to suggest that aliens are a more likely risk than AI. I was trying to show that unknown unknowns cannot be invoked to the extent you suggest. You can’t just say that ruling out many possibilities of how an AI could be dangerous doesn’t make it less dangerous because it might come up with something we haven’t thought about. That line of reasoning would allow you to undermine any evidence to the contrary.
You can’t just say that ruling out many possibilities of how an AI could be dangerous doesn’t make it less dangerous because it might come up with something we haven’t thought about. That line of reasoning would allow you to undermine any evidence to the contrary.
Not quite.
Suppose that someone brought up a number of ways by which an AI could be dangerous, and somebody else refuted them all by pointing out that there’s no particular way by which having superior intelligence would help in them. (In other words, humans could do those things too, and an AI doing them wouldn’t be any more dangerous.) Now if I couldn’t come up with any examples where having a superior intelligence would help, then that would be evidence against the claim that a superior intelligence helps overall.
But all of the examples we have been discussing (nanotech warfare, biological warfare, cyberwarfare) are technological arms races. And in a technological arms race, superior intelligence does bring quite a decisive edge. In the discussion about cyberwarfare, you asked what makes the threat from an AI hacker different from the threat of human hackers. And the answer is that hacking is a task that primarily requires qualities such as intelligence and patience, both of which an AI could have far more of than humans do. Certainly human hackers could do a lot of harm as well, but a single AI could be as dangerous as all of the 90th-percentile human hackers put together.
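As a rough back-of-the-envelope sketch of that last point (every number here is an assumption invented purely for illustration, not a figure from this discussion), compare the attacker-hours available to a large pool of human hackers with those available to a single AI that runs many copies of itself around the clock:

```python
# Back-of-envelope sketch with made-up numbers: attacker-hours per week for
# a pool of skilled human hackers versus one AI running parallel copies 24/7.

human_hackers = 10_000          # assumed size of a skilled human hacker pool
human_hours_per_week = 20       # assumed hours each devotes to attacks

ai_copies = 1_000               # assumed parallel instances of one AI
ai_speedup = 10                 # assumed thinking speed relative to a human
ai_hours_per_week = 24 * 7      # an AI needs no sleep or day job

human_effort = human_hackers * human_hours_per_week
ai_effort = ai_copies * ai_speedup * ai_hours_per_week

print(f"human attacker-hours/week: {human_effort:,}")   # 200,000
print(f"AI-equivalent hours/week:  {ai_effort:,}")      # 1,680,000
print(f"ratio: {ai_effort / human_effort:.1f}x")        # 8.4x
```

Under these made-up assumptions the single AI already outweighs the whole pool; the point is only that intelligence, patience and copyability multiply together, not that these particular numbers are right.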
What I am arguing is that the power of intelligence is vastly overestimated, and therefore so are any associated risks. There are many dumb risks that could just as easily wipe us out. It doesn’t take superhuman intelligence to do that. I also do not see enough evidence for the premise that other superior forms of intelligence are very likely to exist. Further, I argue that there is no hint of any intelligence out there reshaping its environment. The stars show no sign of intelligent tinkering. I provided many other arguments for why other risks might be more worthy of our contributions. I came up with all those ideas in the time it took to write those comments. I simply expect a lot more arguments and other kinds of evidence supporting its premises from an organisation that has been around for over 10 years.
Large brains can be dangerous to those who don’t have them. Look at the current human-caused mass extinction.
There are many dumb risks that could just as easily wipe us out. It doesn’t take superhuman intelligence to do that.
Yes, there are dumb risks that could wipe us out just as well, but only a superhuman intelligence with desires different from humanity’s is guaranteed to wipe us out.
I also do not see enough evidence for the premise that other superior forms of intelligence are very likely to exist.
You don’t need qualitative differences: just take a human-level intelligence and add on enough hardware that it can run many times faster than the best of human thinkers, and hold far more things in its mind at once. If it came to a fight, the humanity of 2000 could muster the armies to crush the best troops of 1800 without trouble. That’s just the result of 200 years of technological development and knowledge acquisition, and doesn’t even require the humans of 2000 to be any more intelligent than the humans of 1800.
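To give a sense of what the speed part alone buys (the speedup factor below is an assumption picked purely for illustration, not a claim from this exchange), consider how quickly subjective thinking time piles up for a mind that runs faster than a human brain:

```python
# Illustrative arithmetic only; the 1000x figure is an assumption.
# A human-level mind running faster accumulates subjective "thinking time"
# far faster than the wall clock moves.

speedup = 1_000                  # assumed: thinks 1000x faster than a human
calendar_days = 1                # one day of wall-clock time

subjective_days = calendar_days * speedup
subjective_years = subjective_days / 365.25

print(f"{calendar_days} calendar day -> {subjective_days} subjective days "
      f"(~{subjective_years:.1f} subjective years of thought)")
# 1 calendar day -> 1000 subjective days (~2.7 subjective years of thought)
```

And that is only the speed half; the “hold far more things in its mind at once” part comes on top of it.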
Further, I argue that there is no hint of any intelligence out there reshaping its environment.
We may not have observed aliens reshaping their environment, but we can certainly observe humans reshaping their environment. This planet is full of artificial structures. We’ve blanketed the Earth with lights that can be seen anywhere we’ve bothered to establish habitation. We’ve changed the Earth so much that we’re disturbing global climate patterns, and now we’re talking about large-scale engineering work to counteract those disturbances. If I choose to, there are ready transportation networks that will get me pretty much anywhere on Earth, and ready networks for supplying me with food, healthcare and entertainment on all the planet’s continents (though admittedly Antarctica is probably a bit tricky from a tourist’s point of view).
I also do not see enough evidence for the premise that other superior forms of intelligence are very likely to exist.
It seems as though it is rather easy to imagine humans being given the “Deep Blue” treatment in a wide range of fields. I don’t see why this would be a sticking point. Human intelligence is plainly just awful, in practically any domain you care to mention.
Further, I argue that there is no hint of any intelligence out there reshaping its environment.
Uh, that’s us. *wave*
In case you didn’t realise, humanity is the proof of concept that superior intelligence is dangerous. Ask a chimpanzee.
I also do not see enough evidence for the premise that other superior forms of intelligence are very likely to exist.
Have you taken an IQ test? Anyone who scores significantly higher than you constitutes a superior form of intelligence.
There are many dumb risks that could just as easily wipe us out. It doesn’t take superhuman intelligence to do that.
Few such dumb risks are being pursued by humanity. Superhuman intelligence solves all dumb risks unless you postulate a dumb risk that is in principle unsolvable. Something like a collapse of the false vacuum might qualify.
Contributing to the creation of FAI doesn’t just decrease the likelihood of UFAI, it also decreases the likelihood of all the other scenarios that end up with humanity ceasing to exist.