Given that definition it doesn’t seem too surprising to me. I guess I have been less skeptical about this than you...
I don’t think much of typical humans.
These kinds of very extreme views are what I have a real problem with.
I see.
And just to substantiate “extreme views”, here is Luke Muehlhauser:
It might be developed in a server cluster somewhere, but as soon as you plug a superhuman machine into the internet it will be everywhere moments later.
That’s not extreme at all, and also not the same as the EY quote. Have you read any computer security papers? You can literally get people to run programs on their computer as root by offering them pennies! That’s the sort of security environment we operate in. Every botnet with millions of computers is a proof of concept.
That’s not extreme at all, and also not the same as the EY quote. Have you read any computer security papers? You can literally get people to run programs on their computer as root by offering them pennies!
He wrote that it will be everywhere moments later. Do you claim that it could take over the Internet within moments?
With proper preparation, yes. To reuse my example: it doesn’t take long to register an Amazon account, offer a high-paying HIT with a binary download which opens up a port on the computer, and within minutes multiple people across the world will have run your trojan (well-paying HITs go very fast & Turkers are geographically diverse, especially if the requester doesn’t set requirements on country*); and then one can begin doing all sorts of other things like fuzzing, SMT solvers to automatically extract vulnerabilities from released patches, building botnets, writing flashworms, etc. At that point, the only real question is how stringently one wishes to define ‘moments later’ and ‘everywhere’.
(Within a few seconds? Hard to see how any plausible AI could copy its entire source code & memories over the existing Internet that fast unless it was for some reason already sitting on something like a gigabit link. Within a few minutes? More reasonable. Is there any real difference in safety? No.)
* IIRC a lot of Turk HITs—like for psychology surveys—specify they want Turkers who are from eg America, to reduce variation or get more relevant answers, so Turk requires a declared country for each worker and lets requesters mandate being from particular countries. That said, there’s a lot of incentive for Turkers to lie—to qualify for more and more high-paying HITs—so even if the AI were for some reason to restrict by country, it’d wind up with a lot of diverse foreign computers anyway.
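To put rough numbers on the ‘few seconds vs. few minutes’ distinction, here is a quick sketch; the payload sizes are hypothetical, and the calculation is idealized (it ignores protocol overhead, congestion, and latency):

```python
def transfer_time_seconds(data_bytes, link_bits_per_sec):
    """Idealized transfer time over a single link."""
    return data_bytes * 8 / link_bits_per_sec

GB = 10**9
GBIT = 10**9  # a gigabit link

# A small seed (1 GB, hypothetical) vs. full code + memories (100 GB, also hypothetical):
print(transfer_time_seconds(1 * GB, GBIT))    # 8.0 seconds
print(transfer_time_seconds(100 * GB, GBIT))  # 800.0 seconds, ~13 minutes
```

So even on a gigabit link, anything beyond a small seed lands in the ‘minutes’ regime rather than ‘seconds’, which is the point of the parenthetical above.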
With proper preparation, yes. To reuse my example: it doesn’t take long to register an Amazon account, offer a high-paying HIT with a binary download which opens up a port on the computer, and within minutes multiple people across the world will have run your trojan (well-paying HITs go very fast & Turkers are geographically diverse, especially if the requester doesn’t set requirements on country*); and then one can begin doing all sorts of other things like fuzzing, SMT solvers to automatically extract vulnerabilities from released patches, building botnets, writing flashworms, etc.
Thanks. Looks like my perception is mainly based on my lack of expertise about security and the resulting inferential distance.
Hard to see how any plausible AI could copy its entire source code & memories over the existing Internet that fast unless it was for some reason already sitting on something like a gigabit link.
Are there good reasons to assume that the first such AI won’t be running on a state-of-the-art supercomputer? Take the movie Avatar. The resources needed to render it were 4,000 Hewlett-Packard servers with 35,000 processor cores, 104 terabytes of RAM, and three petabytes of storage. I suppose that it would have been relatively hard to render it on illegally obtained storage and computational resources?
Do we have any estimates on how quickly a superhuman AI’s storage requirements would grow? CERN produces 30 petabytes of data per year. If an AI undergoing an intelligence explosion needs to store vast amounts of data then it will be much harder for it to copy itself.
The uncertainties involved here still seem to be too big to claim that a superhuman intelligence will be everywhere moments after you connect it to the Internet.
Take the movie Avatar. The resources needed to render it were 4,000 Hewlett-Packard servers with 35,000 processor cores, 104 terabytes of RAM, and three petabytes of storage. I suppose that it would have been relatively hard to render it on illegally obtained storage and computational resources?
I don’t think so. Consider botnets.
How hard is it to buy time on a botnet? Not too hard, since they exist for the sake of selling their services, after all.
Do they have the capacity? Botnets range in size from a few computers to extremes of 30 million computers; if they’re desktops, then briefly looking at the cheapest desktops on Newegg, average RAM these days tends to be at least 4GB, CPUs dual-core, and hard drives >=500GB. So to get those specs: 35k cores is 17.5k desktops; for 104TB of RAM you’d need a minimum of 104000 / 4 = 26k computers; and the 3PB would be 6k (3000000 / 500); botnets can’t use 100% of host resources or their attrition will be even higher than usual, so double the numbers, and the minimum of the biggest is 52k. Well within the range of normal botnets (the Wikipedia list has 22 botnets which could’ve handled that load). And AFAIK CGI rendering is very parallel, so the botnet being high-latency and highly distributed might not be as big an issue as it seems.
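The host-count arithmetic can be checked mechanically (the per-desktop specs are the rough Newegg figures quoted above):

```python
# Avatar render-farm requirements quoted above
cores_needed = 35_000
ram_needed_gb = 104 * 1000       # 104 TB
storage_needed_gb = 3 * 10**6    # 3 PB

# Rough cheap-desktop specs assumed above
cores_per_host = 2
ram_per_host_gb = 4
storage_per_host_gb = 500

hosts_for_cores = cores_needed // cores_per_host              # 17,500
hosts_for_ram = ram_needed_gb // ram_per_host_gb              # 26,000
hosts_for_storage = storage_needed_gb // storage_per_host_gb  # 6,000

# RAM is the binding constraint; double it since a botnet can't use 100% of host resources
minimum_botnet = 2 * max(hosts_for_cores, hosts_for_ram, hosts_for_storage)
print(minimum_botnet)  # 52000
```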
How much would it cost? Because botnets are a market, it’s been occasionally studied/reported on by the likes of Brian Krebs (google ‘cost of renting a botnet’). For example, https://www.damballa.com/want-to-rent-an-80-120k-ddos-botnet/ says you could rent an 80-120k botnet for $200 a day, or a 12k botnet for $500 a month—so presumably 5 such botnets would cost a quite reasonable $2500 per month. (That’s much cheaper than Amazon AWS, looks like: per https://calculator.s3.amazonaws.com/index.html, 17,500 t2.medium instances would cost ~$666k a month.)
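The rental-cost arithmetic follows directly from the quoted Damballa rates and the 52k-host estimate in the thread:

```python
# Quoted Damballa rate: a 12k-host botnet rents for $500/month
hosts_per_botnet = 12_000
dollars_per_month = 500
hosts_needed = 52_000  # the doubled-for-attrition estimate from the capacity discussion

botnets_needed = -(-hosts_needed // hosts_per_botnet)  # ceiling division
monthly_cost = botnets_needed * dollars_per_month
print(botnets_needed, monthly_cost)  # 5 2500
```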
Do we have any estimates on how quickly a superhuman AI’s storage requirements would grow? CERN produces 30 petabytes of data per year. If an AI undergoing an intelligence explosion needs to store vast amounts of data then it will be much harder for it to copy itself.
I don’t know. Humans get by adding only a few bits per second to long-term memory, Landauer estimated, but I’m not sure how well that maps onto an AI.
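For scale, Landauer’s rate implies a remarkably small lifetime total; a back-of-the-envelope sketch, taking 2 bits/second (the upper end of his rough figure) as the assumed input:

```python
# Landauer's estimate: on the order of 1-2 bits/second into human long-term memory.
bits_per_second = 2  # assumed upper-end figure
seconds_per_year = 365 * 24 * 3600
lifetime_years = 70

lifetime_bits = bits_per_second * seconds_per_year * lifetime_years
lifetime_mb = lifetime_bits / 8 / 10**6
print(f"{lifetime_mb:.0f} MB")  # 552 MB
```

That is, a human lifetime of long-term memory fits on a CD-ROM, which is why it is so unclear how (or whether) the figure maps onto an AI’s storage growth.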
The uncertainties involved here still seem to be too big to claim that a superhuman intelligence will be everywhere moments after you connect it to the Internet.
It may not be able to move itself instantly, but given everything we know about botnets and computer security, it would be able to control a lot of remote computers quickly if prepared, and that opens up a lot of avenues one would rather not have to deal with. (Original AI connects to Internet, seizes a few computers across the world, installs a small seed as root with a carefully packaged set of instructions, small database of key facts around the world, and a bunch of Bitcoin private keys for funding while the seed grows to something approaching the original, and lets itself be wiped by its vigilant overseers.)
Why does the AI have to transfer its source code? I assumed we were just talking about taking over machines as effectors.
It will want to—for safety, to avoid being vulnerable due to having a single point of failure.
However, the “take over in moments” refers just to control, I think. Getting the AI to become fully distributed and able to tolerate losing large chunks of hardware will take a fair amount of time and, likely, hardware upgrades.