Of course, since in that scenario we couldn’t know about nanotech, our mistake wouldn’t be in ignoring it, but rather in choosing a methodology that is incapable of dealing with unknown unknowns even in principle.
You have to limit the scope of unknown unknowns. Otherwise, why not apply the same line of reasoning to risks associated with aliens? If someone says that there is no sign of aliens, you just respond that they might hide or use different methods of communication. That is the same as saying that if the AI can’t make use of nanotechnology, it might make use of something we haven’t even thought about. What, magic?
Yes, you could very well make an argument for the risks posed by superintelligent aliens. But then you would also have to produce an argument for a) why it’s plausible to assume that superintelligent aliens will show up anytime soon, and b) what we could do to prevent an invasion by superintelligent aliens if they did show up.
For AGI, we have an answer to point a (progress in computing power, neuroscience and brain reverse-engineering, etc.) and a preliminary answer to point b (figure out how to build benevolent AGIs). There are no corresponding answers to points a and b for aliens.
If someone says that there is no sign of aliens, you just respond that they might hide or use different methods of communication. That is the same as saying that if the AI can’t make use of nanotechnology, it might make use of something we haven’t even thought about.
No, it’s not: think about this again. “Aliens of superior intelligence might wipe us out by some means we don’t know” is symmetric to “an AGI with superior intelligence might wipe us out by some means we don’t know”. But “aliens of superior intelligence might appear out of nowhere” is not symmetric to “an AGI with superior intelligence might wipe us out by some means we don’t know”: as I noted above, we have specific reasons to expect AGIs to show up, and no corresponding reasons to expect aliens.
I didn’t mean to suggest that aliens are a more likely risk than AI. I was trying to show that unknown unknowns cannot be invoked to the extent you suggest. You can’t just say that ruling out many possibilities of how an AI could be dangerous doesn’t make it less dangerous because it might come up with something we haven’t thought about. That line of reasoning would allow you to undermine any evidence to the contrary.
I’ll be back tomorrow.
You can’t just say that ruling out many possibilities of how an AI could be dangerous doesn’t make it less dangerous because it might come up with something we haven’t thought about. That line of reasoning would allow you to undermine any evidence to the contrary.
Not quite.
Suppose that someone brought up a number of ways by which an AI could be dangerous, and somebody else refuted them all by pointing out that there’s no particular way in which having superior intelligence would help with them. (In other words, humans could do those things too, and an AI doing them wouldn’t be any more dangerous.) Now if I couldn’t come up with any examples where having superior intelligence would help, then that would be evidence against the claim that “a superior intelligence helps overall”.
But all of the examples we have been discussing (nanotech warfare, biological warfare, cyberwarfare) are technological arms races. And in a technological arms race, superior intelligence does bring quite a decisive edge. In the discussion about cyberwarfare, you asked what makes the threat from an AI hacker different from the threat of human hackers. And the answer is that hacking is a task that primarily requires qualities such as intelligence and patience, both of which an AI could have in far greater amounts than humans do. Certainly human hackers could do a lot of harm as well, but a single AI could be as dangerous as all of the 90th-percentile human hackers put together.