You assume that every computer can run a seed AI
Yes I do. But it may not be as probable as I thought.
and that the global infrastructure is very stable under attack.
I said as much. And this one seems more plausible. If we uphold freedom, a sensible policy for the internet is to make it as resilient and uncontrollable as possible. If we don’t, well…
Now if those two assumptions are correct, and we further assume the AI already controls a single computer with an internet connection, then it has plenty of resources to take over a second one. It would need to:
Find a security flaw somewhere (including convincing someone to run arbitrary code), upload itself there, then rinse and repeat (see the toy sketch below).
Or, find and exploit credit card numbers (or convince someone to give them away), then buy computing power.
Or, find and convince someone (typically a lawyer) to set up a company for it, then make money (legally or not), then buy computing power.
Or, …
Humans do that right now. (Credit card theft, money laundering, various scams, legit offshore companies…)
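To make the “rinse and repeat” structure concrete, here is a deliberately toy sketch. Everything in it is an assumption invented for illustration: hosts are plain integers, and the hypothetical can_compromise() is a coin flip standing in for “found a flaw, or convinced someone to run code”. The only real content is the shape of the loop: every success becomes a new copy that repeats it.

```python
# A purely schematic toy model of the "rinse and repeat" loop above.
# Nothing here attacks anything: hosts are plain integers, and the
# hypothetical can_compromise() is a coin flip standing in for
# "found a security flaw, or convinced someone to run arbitrary code".

import random

def can_compromise(host):
    """Hypothetical stand-in for finding a technical or social way in."""
    return random.random() < 0.03          # assumed 3% success rate per probe

def spread(controlled, reachable=100, rounds=5):
    """Each controlled copy probes the machines it can reach; every
    success becomes a new copy that repeats the loop next round."""
    for r in range(rounds):
        new = set()
        for host in controlled:
            candidates = range(host * reachable, (host + 1) * reachable)
            new |= {c for c in candidates if can_compromise(c)}
        controlled |= new
        print(f"round {r + 1}: {len(controlled)} copies")
    return controlled

spread({0})
# With ~3 expected successes per copy per round, the count roughly
# quadruples each round: growth is geometric until targets run out.
```

The actual probabilities and mechanisms are exactly what the rest of this exchange disputes; the sketch only shows why success compounds.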
Of course, if the first computer isn’t connected, the AI would have to get out of the box first. But Eliezer can do that already (and he’s not alone). It’s a long shot, but if several equally capable AIs pop up in different laboratories worldwide, then eventually one of them will be able to talk its way out.
But humans are optimized to do all that, to work in a complex world. And humans are not running on a computer, watched by creators eager to write new studies on how their algorithms behave. I just don’t see it as a plausible scenario that all this could happen unnoticed.
Also, simple credit card theft and the like isn’t enough. At some point you’ll have to buy Intel, or create your own companies, to manufacture your new substrate or build your new particle accelerator.
OK, let this AI be safely contained, and let the researchers publish. Now, what’s stopping some idiot from writing a poorly specified goal system, then deliberately letting the AI out of the box so it can take over the world? It only takes one idiot among the many who could read the publication.
And of course credit card theft isn’t enough by itself. But it is enough to bootstrap yourself into something more profitable. There are many ways to acquire money, and the AI, by duplicating itself, can access many of them at the same time. If the AI does nothing stupid, its expansion should be both undetectable and exponential. I give it a year to buy Intel or something.
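Whether “a year to buy Intel” is even arithmetically coherent is easy to check. A minimal back-of-envelope sketch, assuming a $10,000 starting stake and a $100 billion Intel-scale price tag (both numbers invented here for illustration, not taken from the discussion):

```python
# Back-of-envelope check on "a year to buy Intel", under assumed numbers:
# a $10,000 starting stake and a ~$100 billion target (roughly an
# Intel-scale market capitalization; both figures are assumptions).
from math import log2

stake  = 1e4     # assumed starting capital, in dollars
target = 1e11    # assumed Intel-scale price tag, in dollars

doublings = log2(target / stake)           # ~23.3 doublings needed
print(f"{doublings:.1f} doublings; one every "
      f"{365 / doublings:.0f} days fits in a year")
```

Doubling one’s money every two weeks or so is an absurd return by human standards, so the claim rests entirely on the AI being a far better optimizer than any human; the arithmetic itself is not the bottleneck.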
Sure, in the meantime there will be other AIs with different poorly specified goal systems. Some of them could even be genuinely Friendly. But then we’re screwed anyway, for this will probably end up in something like a Hansonian Nightmare. At this point, the only thing that could stop it would be a genuine Seed AI that can outsmart them all. You have less than a year to develop it and ensure its Friendliness.
Humans are not especially optimized to work in the environment loup-vaillant describes.