Of course, the Singularity argument in no way relies on nanotech.
Without advanced real-world nanotechnology it will be considerably more difficult for an AI to FOOM and therefore pose an existential risk. It will have to make use of existing infrastructure, e.g. buy stock in chip manufacturers and get them to create more or better CPUs. It will have to rely on puny humans for a lot of tasks. It won’t be able to create new computational substrate without the whole economy of the world supporting it. It won’t be able to create an army of robot drones overnight without it either.
To do so it would have to make use of considerable amounts of social engineering without its creators noticing it. But more importantly, it will have to make use of its existing intelligence to do all of that. The AGI would have to acquire new resources slowly, as it couldn’t just self-improve to come up with faster and more efficient solutions. In other words, self-improvement demands resources, so the AGI could not use its ability to self-improve to acquire the very resources it needs in order to self-improve in the first place.
So the absence of advanced nanotechnology is an immense blow to any risk estimate that assumes nanotech is already available. Further, if one assumes that nanotech is a prerequisite for AI going FOOM, another question arises: it should be easier to create advanced replicators that destroy the world than to create an AGI that then creates advanced replicators that then destroy the world. So one might ask which is the bigger risk here.
Giving the worm-scenario a second thought, I do not see how an AGI would benefit from doing that. An AGI incapable of acquiring resources by means of advanced nanotech assemblers would likely just pretend to be friendly to get humans to build more advanced computational substrates. Launching any large-scale attack on the existing infrastructure would cause havoc but also damage the AI itself, because governments (China etc.) would shut down the whole Internet rather than live with such an infection. Or even nuke the AI’s mainframe. And even if it could increase its intelligence further by making use of unsuitable and ineffective substrates it would still be incapacitated, stuck in the machine. Without advanced nanotechnology you simply cannot grow exponentially or make use of recursive self-improvement beyond the software-level. This in turn considerably reduces the existential risk posed by an AI. That is not to say that it wouldn’t be a huge catastrophe as well, but there are other catastrophes on the same scale that you would have to compare it against. Only by implicitly making FOOMing the premise can one make it the most dangerous high-impact risk (never mind aliens, the LHC etc.).
You don’t see how an AGI would benefit from spreading itself in a distributed form to every computer on the planet, controlling and manipulating all online communications, encrypting the contents of hard drives and holding them hostage, etc.? You could have the AGI’s code running on every Internet-connected computer on the planet, which would make it virtually impossible to get rid of.
And even though we might be capable of shutting down the Internet today, at the cost of severe economic damage, I’m pretty sure that that’ll become less and less of a possibility as time goes on, especially if the AGI is also holding as hostage the contents of any hard drives without off-line backups. Add to that the fact that we’ve already had one case of a computer virus infecting the control computers in an off-line facility and, according to one report, delaying the nuclear program of a country by two years. Add to that the fact that even people’s normal phones are increasingly becoming smartphones, which can be hacked, and that simpler phones have already shown themselves to be vulnerable to being crashed by a well-crafted SMS. Let 20 more years pass, with us becoming more and more dependent on IT, and an AGI could probably hold all of humanity hostage—shutting down the entire Internet simply wouldn’t be an option.
And even if it could increase its intelligence further by making use of unsuitable and ineffective substrates it would still be incapacitated, stuck in the machine. Without advanced nanotechnology you simply cannot grow exponentially or make use of recursive self-improvement beyond the software-level.
This is nonsense. The AGI would control the vast majority of our communications networks. Once you can decide which messages get through and which ones don’t, having humans build whatever you want is relatively trivial. Besides, we already have early stage self-replicating machinery today: you don’t need nanotech for that.
And even though we might be capable of shutting down the Internet today, at the cost of severe economic damage, I’m pretty sure that that’ll become less and less of a possibility as time goes on, especially if the AGI is also holding as hostage the contents of any hard drives without off-line backups.
I understand. But how do you differentiate this from the same incident involving an army of human hackers? The AI will likely be very vulnerable if it runs on some supercomputer and even more so if it runs in the cloud (just use an EMP). In contrast, an army of human hackers can’t be disrupted that easily and is an enemy you can’t pinpoint. You are portraying a certain scenario here, and I do not see it as a convincing argument that risks from AI outweigh other risks.
The AGI would control the vast majority of our communications networks. Once you can decide which messages get through and which ones don’t, having humans build whatever you want is relatively trivial.
It isn’t trivial. There is a strong interdependence of resources and manufacturers. The AI won’t be able to simply make some humans build a high-end factory to create computational substrate. People will ask questions and shortly after get suspicious. Remember, it won’t be able to coordinate a world conspiracy, because it hasn’t been able to self-improve to that point yet: it is still trying to acquire enough resources, which it has to do the hard way without nanotech. You’d probably need a brain the size of the moon to effectively run and coordinate a whole world of irrational humans by intercepting their communications and altering them on the fly without anyone freaking out.
and even more so if it runs in the cloud (just use an EMP).
The point was that you can’t use an EMP if that means bringing down the whole human computer network.
It isn’t trivial. There is a strong interdependence of resources and manufacturers. The AI won’t be able to simply make some humans build a high-end factory to create computational substrate. People will ask questions and shortly after get suspicious.
Why would people need to get suspicious? If you keep tabs on all the communications in the world, you can make a killing on the market, even if you didn’t delay the orders from your competitors. One could fully legitimately raise enough money by trading to hire people to do everything you wanted. Nobody needs to ever notice that there’s something amiss, especially not if you do it via enough shell corporations.
Of course, the AGI could also use more forceful means, though it’s by no means necessary. If the AGI revealed itself and the fact that it was holding all of humanity’s networked computers hostage, it could probably just flat-out tell the humans “do this or else”. Sure, not everyone would obey, but some would. Also, disrupt enough communications and manufacture enough chaos, and people will be too distracted and stressed to properly question forged orders. Social engineering is rather easy with humans, and desperate people are quite prone to wishful thinking.
Remember, it won’t be able to coordinate a world conspiracy, because it hasn’t been able to self-improve to that point yet: it is still trying to acquire enough resources, which it has to do the hard way without nanotech.
This claim strikes me as bizarre. Why would you need nanotech to acquire more resources for self-improvement?
Some botnets have been reported to have around 350,000 members. Currently, the distributed computing project Folding@Home, with 290,000 active clients composed mostly of volunteer home PCs and PlayStations, can reach speeds in the 10^15 FLOPS range. Now say that an AGI gets developed 20 years from now. A relatively conservative estimate that presumed an AGI couldn’t hack into more computers than the best malware practitioners of today, that a personal computer would have a hundred times the computing power of today, and that an AGI required a minimum of 10^13 FLOPS to run, would suggest that an AGI could either increase its own computational capacity 12,000-fold, or spawn 12,000 copies of itself.
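For what it’s worth, here is a minimal sketch of the back-of-the-envelope arithmetic behind that 12,000 figure, using only the numbers quoted above; the per-machine speed is simply inferred by dividing the Folding@Home aggregate by its client count:

```python
# Rough reproduction of the estimate above, using only the quoted figures.
folding_at_home_flops = 1e15        # aggregate speed of the project today
folding_at_home_clients = 290_000   # active clients producing that speed
botnet_size = 350_000               # reported size of some botnets
speedup_in_20_years = 100           # assumed per-machine improvement
agi_flops_required = 1e13           # assumed minimum to run one AGI instance

flops_per_machine_today = folding_at_home_flops / folding_at_home_clients
flops_per_machine_future = flops_per_machine_today * speedup_in_20_years
botnet_flops_future = botnet_size * flops_per_machine_future

print(f"{botnet_flops_future / agi_flops_required:,.0f} AGI copies")
# -> about 12,000 copies, or equivalently a ~12,000-fold capacity increase
```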
Alternatively, if it wanted to avoid detection and didn’t want to do anything illegal or otherwise suspicious, it could just sign up on a site like oDesk and do lots of programming jobs for money, then rent all the computing capacity it needed until it was ready to unleash itself to the world. This is actually the more likely alternative, as unlike the botnet/hacking scheme it’s a virtually risk-free way to gain extra processing power.
Acquiring computational resources is easy, and it will only get easier as time goes on. Also, while upgrading your hardware is one way of self-improving, you seem to be completely ignoring the potential for software self-improvement. It’s not a given that the AGI would even need massive hardware. For instance:
Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.
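As a quick sanity check on those numbers (a sketch using nothing beyond the figures quoted above): the two cited factors multiply out to the cited total, and 82 years is roughly 43 million minutes, which is why a 43-million-fold improvement collapses 82 years into about one minute.

```python
# Consistency check of the Grötschel figures quoted above.
hardware_factor = 1_000      # cited speedup from faster processors, 1988-2003
algorithm_factor = 43_000    # cited speedup from better LP algorithms
print(f"combined speedup: {hardware_factor * algorithm_factor:,}")  # 43,000,000

minutes_in_82_years = 82 * 365.25 * 24 * 60
print(f"82 years in minutes: {minutes_in_82_years:,.0f}")  # ~43,128,720
```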
The point is that you people are presenting an idea that is an existential risk by definition. I claim that it might superficially appear to be the most dangerous of all risks but that this is mostly a result of its vagueness.
If you say that there is the possibility of a superhuman intelligence taking over the world and all its devices to destroy humanity then that is an existential risk by definition. I counter that I dispute some of the premises and the likelihood of subsequent scenarios. So to make me update on the original idea you would have to support your underlying premises rather than arguing within already established frameworks that impose several presuppositions onto me.
I’m not convinced of this. As time progresses there are more and more vulnerable systems on the internet, many of which shouldn’t be. That includes nuclear power plants, particle accelerators, conventional power plants and others. Other systems likely have some methods of access, such as communication satellites. Soon this will also include almost completely automated manufacturing plants. An AI that quickly grows to control much of the internet would have access directly to nasty systems and just have a lot more processing power. The extra processing power means that the AI can potentially crack cryptosystems that are being used by secure parts of the internet or non-internet systems that use radio to communicate.
That said, I agree that without strong nanotech this seems like an unlikely scenario.
An AI that quickly grows to control much of the internet would have access directly to nasty systems and just have a lot more processing power.
Yes, but then how does this risk differ from asteroid impacts, solar flares, bio weapons or nanotechnology? The point is that the only reason for a donation to the SIAI to have a higher expected payoff is the premise that AI can FOOM and kill all humans and take over the universe. In all other cases dumb risks are as likely or more likely, and can wipe us out just as well. So why the SIAI? I’m trying to get a more definite answer to that question. I at least have to consider all possible arguments I can come up with in the time it takes to write a few comments and see what feedback I get. That way I can update my estimates and refine my thinking.
Yes, but then how does this risk differ from asteroid impacts, solar flares
Asteroid impacts and solar flares are relatively ‘dumb’ risks, in that they can be defended against once you know how. They don’t constantly try to outsmart you.
bio weapons or nanotechnology?
This question is a bit like asking “yes, I know bioweapons can be dangerous, but how does the risk of genetically engineered E. coli differ from the risk of bioweapons”.
Bioweapons and nanotechnology are particular special cases of “dangerous technologies that humans might come up with”. An AGI is potentially employing all of the dangerous technologies humans—or AGIs—might come up with.
Your comment assumes that I agree on some premises that I actually dispute. That an AGI would employ all other existential risks, and would therefore be the most dangerous of them, doesn’t follow: if such an AGI is only as likely as the other risks, then it doesn’t matter whether we are wiped out by one of the other risks or by an AGI making use of one of those risks.
Yes, but then how does this risk differ from asteroid impacts, solar flares, bio weapons or nanotechnology?
Well, one doesn’t need to think that it is intrinsically different. One would just need to think that the marginal return here is high because we aren’t putting many resources into looking at the problem right now. Someone could potentially make that sort of argument for any existential risk.
Yes. I am getting much better responses from you than from some of the donors that replied, or the SIAI itself. Which isn’t very reassuring. Anyway, you are of course right there. The SIAI is currently looking into the one existential risk that is most underfunded. As I said before, I believe that the SIAI should exist and therefore should be supported. Yet I still can’t follow some of the more frenetic supporters. That is, I don’t see the case being as strong as some portray it. And there is not enough skepticism here, although people reassure me constantly that they were skeptical but were eventually convinced. They just don’t seem very convincing to me.
I guess I should stop trying then? Have I not provided anything useful? And do I come across as “frenetic”? That’s certainly not how I feel. And I figured 90 percent chance we all die to be pretty skeptical. Maybe you weren’t referring to me...
I’m sorry, I shouldn’t have phrased my comment like that. No, I was referring to this and this comment that I just got. I feel too tired to reply to those right now because I feel they do not answer anything and that I have already tackled their content in previous comments. I’m sometimes getting a bit weary when the amount of useless information gets too high. They probably feel the same about me and I should be thankful that they take the time at all. I can assure you that my intention is not to attack anyone or the SIAI personally just to discredit them. I’m honestly interested, simply curious.
OK, cool. Yeah, this whole thing does seem to go in circles at times… it’s the sort of topic where I wish I could just meet face to face and hash it out over an hour or so.
A large solar outburst could cause similar havoc. Or some rogue group buys all Google stock, tweaks its search algorithm and starts to influence election outcomes by slightly skewing the results in favor of certain candidates while using its massive data repository to spy on people. There are a lot of scenarios. But the reason to consider the availability of advanced nanotechnology when assessing AI-associated existential risks is to reassess their impact and probability. An AI that can make use of advanced nanotech is certainly much more dangerous than one taking over the infrastructure of the planet by means of cyber-warfare. The question is whether such a risk is still bad enough to outweigh other existential risks. That is the whole point here: comparison of existential risks to assess the value of contributing to the SIAI. If you scale back from an AGI capable of quick self-improvement by use of nanotech to one limited to infrastructure takeover, then working to prevent such a catastrophe is not that far removed anymore from working on building an infrastructure more resistant to electromagnetic pulse weapons or solar flares.
The correct way to approach a potential risk is not to come up with a couple of specific scenarios relating to the risk, evaluate those, and then pretend that you’ve done a proper analysis of the risk involved. That’s analogous to trying to make a system secure by patching security vulnerabilities as they show up and not even trying to employ safety measures such as firewalls, or trying to make a software system bug-free simply by fixing bugs as they get reported and ignoring techniques such as unit tests, defensive programming, etc. It’s been tried and conclusively found to be a bad idea by both the security and software engineering communities. If you want to be safe, you need to take into account as many possibilities as you can, not just concentrate on the particular special cases that happened to rise to your attention.
The proper unit of analysis here is not the particular techniques that an AI might use to take over. That’s pointless: for any particular technique that we discuss here, there might be countless others that the AI could employ, many of them ones nobody has even thought of yet. If we were in an alternate universe where Eric Drexler had been run over by a car before ever coming up with his vision of molecular nanotechnology, the whole concept of strong nanotech might be unknown to us. If we then only looked at the prospects for cyberwar, and concluded that an AI isn’t a big threat because humans can do cyberwarfare too, we could be committing a horrible mistake by completely ignoring nanotech. Of course, since in that scenario we couldn’t know about nanotech, our mistake wouldn’t be ignoring it, but rather in choosing a methodology which is incapable of dealing with unknown unknowns even in principle.
So what is the right unit of analysis? It’s the power of intelligence. It’s the historical case of a new form of intelligence showing up on the planet and completely reshaping its environment to create its own tools. It’s the difference in the power of the chimpanzee species to change its environment towards its preferred state, and the power of the human species to change its environment towards its preferred state. You saying “well here I’ve listed these methods that an AI could use to take over humanity, and I’ve analyzed them and concluded that the AI is of no threat” is just as fallacious as it would be for a chimpanzee to say “well here I’ve listed these methods that a human could take over chimpanzity, and I’ve analyzed them and concluded that humans are no threat to us”. You can’t imagine the ways that an AI could come up with and attempt to use against us, so don’t even try. Instead, look at the historical examples of what happens when you pit a civilization of inferior intelligences against a civilization of hugely greater ones. And that will tell you that a greater-than-human intelligence is the greatest existential risk there is, for it’s the only one where it’s by definition impossible for us to come up with the ways to stop it once it gets out of control.
Of course, since in that scenario we couldn’t know about nanotech, our mistake wouldn’t be ignoring it, but rather in choosing a methodology which is incapable of dealing with unknown unknowns even in principle.
You have to limit the scope of unknown unknowns. Otherwise why not employ the same line of reasoning to risks associated with aliens? If someone says that there is no sign of aliens you just respond that they might hide or use different methods of communication. That is the same as saying that if the AI can’t make use of nanotechnology it might make use of something we haven’t even thought about. What, magic?
Yes, you could very well make an argument for the risks posed by superintelligent aliens. But then you would also have to produce an argument for a) why it’s plausible to assume that superintelligent aliens will show up anytime soon b) what we could do to prevent the invasion of superintelligent aliens if they did show up.
For AGI we have an answer for point a (progress in computing power, neuroscience and brain reverse-engineering, etc.) and a preliminary answer for point b (figure out how to build benevolent AGIs). There are no corresponding answers to points a and b for aliens.
If someone says that there is no sign of aliens you just respond that they might hide or use different methods of communication. That is the same as saying that if the AI can’t make use of nanotechnology it might make use of something we haven’t even thought about.
No it’s not: think about this again. “Aliens of a superior intelligence might wipe us out by some means we don’t know” is symmetric to “an AGI with superior intelligence might wipe us out by some means we don’t know”. But “aliens of superior intelligence might appear out of nowhere” is not symmetric to “an AGI with superior intelligence might wipe us out by some means we don’t know”.
I didn’t mean to suggest that aliens are a more likely risk than AI. I was trying to show that unknown unknowns can not be employed to the extent you suggest. You can’t just say that ruling out many possibilities of how an AI could be dangerous doesn’t make it less dangerous because it might come up with something we haven’t thought about. That line of reasoning would allow you to undermine any evidence to the contrary.
You can’t just say that ruling out many possibilities of how an AI could be dangerous doesn’t make it less dangerous because it might come up with something we haven’t thought about. That line of reasoning would allow you to undermine any evidence to the contrary.
Not quite.
Suppose that someone brought up a number of ways by which an AI could be dangerous, and somebody else refuted them all by pointing out that there’s no particular way by which having superior intelligence would help in them. (In other words, humans could do those things too, and an AI doing them wouldn’t be any more dangerous.) Now if I couldn’t come up with any examples where having a superior intelligence would help, then that would be evidence against the claim that “a superior intelligence helps overall”.
But all of the examples we have been discussing (nanotech warfare, biological warfare, cyberwarfare) are technological arms races. And in a technological arms race, superior intelligence does bring quite a decisive edge. In the discussion about cyberwarfare, you asked what makes the threat from an AI hacker different from the threat of human hackers. And the answer is that hacking is a task that primarily requires qualities such as intelligence and patience, both of which an AI could have a lot more than humans do. Certainly human hackers could do a lot of harm as well, but a single AI could be as dangerous as all of the 90th percentile human hackers put together.
What I am arguing is that the power of intelligence is vastly overestimated, and therefore so are any associated risks. There are many dumb risks that can easily accomplish the same, wipe us out. It doesn’t need superhuman intelligence to do that. I also do not see enough evidence for the premise that other superior forms of intelligence are very likely to exist. Further I argue that there is no hint of any intelligence out there reshaping its environment. The stars show no sign of intelligent tinkering. I provided many other arguments for why other risks might be more worthy of our contribution. I came up with all those ideas in the time it took to write those comments. I simply expect a lot more arguments and other kinds of evidence supporting their premises from an organisation that has been around for over 10 years.
There are many dumb risks that can easily accomplish the same, wipe us out. It doesn’t need superhuman intelligence to do that.
Yes, there are dumb risks that could wipe us out just as well: but only a superhuman intelligence with different desires than humanity is guaranteed to wipe us out.
I also do not see enough evidence for the premise that other superior forms of intelligence are very likely to exist.
You don’t need qualitative differences: just take a human-level intelligence and add on enough hardware that it can run many times faster than the best of human thinkers, and hold far more things in its mind at once. If it came to a fight, the humanity of 2000 could easily muster the armies to crush the best troops of 1800 without trouble. That’s just the result of 200 years of technological development and knowledge acquisition, and doesn’t even require us to be more intelligent than the humans of 2000.
Further I argue that there is no hint of any intelligence out there reshaping its environment.
We may not have observed aliens reshaping their environment, but we can certainly observe humans reshaping their environment. This planet is full of artificial structures. We’ve blanketed the Earth with lights that can be seen anywhere where we’ve bothered to establish habitation. We’ve changed the Earth so much that we’re disturbing global climate patterns, and now we’re talking about large-scale engineering work to counteract those disturbances. If I choose to, there are ready transportation networks that will get me pretty much anywhere on Earth, and ready networks for supplying me with food, healthcare and entertainment on all the planet’s continents (though admittedly Antarctica is probably a bit tricky from a tourist’s point of view).
I also do not see enough evidence for the premise that other superior forms of intelligence are very likely to exist.
It seems as though it is rather easy to imagine humans being given the “Deep Blue” treatment in a wide range of fields. I don’t see why this would be a sticking point. Human intelligence is plainly just awful, in practically any domain you care to mention.
Further I argue that there is no hint of any intelligence out there reshaping its environment.
Uh, that’s us. wave
In case you didn’t realise, humanity is the proof of concept that superior intelligence is dangerous. Ask a chimpanzee.
I also do not see enough evidence for the premise that other superior forms of intelligence are very likely to exist.
Have you taken an IQ test? Anyone who scores significantly higher than you constitutes a superior form of intelligence.
There are many dumb risks that can easily accomplish the same, wipe us out. It doesn’t need superhuman intelligence to do that.
Few such dumb risks are being pursued by humanity. Superhuman intelligence solves all dumb risks unless you postulate a dumb risk that is in principle unsolvable. Something like collapse of vacuum energy might do it.
Contributing to the creation of FAI doesn’t just decrease the likelihood of UFAI, it also decreases the likelihood of all the other scenarios that end up with humanity ceasing to exist.
To be honest, I think this scenario, in which the AI has to work through existing infrastructure and human proxies, is a far scarier AI-goes-FOOM scenario than nanotech.
and even more so if it runs in the cloud (just use an EMP).

And what kind of computer controls the EMP? Or is it hand-cranked?
Are you aware of what the most common EMPs are? Nukes. The computer that triggers the high explosive lenses is already molten vapor by the time that the chain reaction has begun expanding into a fireball.
What kind of computer indeed!
I used this very example when arguing with Robin Hanson during the after-lecture Q&A (it should be in Parsons Part 2); it did not seem to help :)
What I am arguing is that the power of intelligence is vastly overestimated, and therefore so are any associated risks.

Large brains can be dangerous to those who don’t have them. Look at the current human-caused mass extinction.