Humans do that right now. (Credit card theft, money laundering, various scams, legit offshore companies…)
But humans are optimized to do all that, to work in a complex world. And humans, unlike an AI, are not running on a computer, watched by creators eager to write new studies on how its algorithms behave. I just don't see a plausible scenario in which all this could happen unnoticed.
Also, simple credit card theft and the like isn't enough. At some point the AI will have to buy Intel, or create its own companies, to manufacture its new substrate or build its new particle accelerator.
OK, let this AI be safely contained, and let the researchers publish. Now, what's stopping some idiot from writing a poorly specified goal system, then deliberately letting the AI out of the box so it can take over the world? It only takes one idiot among the many who could read the publication.
And of course credit card theft isn't enough by itself. But it is enough for the AI to bootstrap itself into something more profitable. There are many ways to acquire money, and the AI, by duplicating itself, can pursue many of them at the same time. If the AI does nothing stupid, its expansion should be both undetectable and exponential. I give it a year to buy Intel or something.
Sure, in the meantime, there will be other AIs with different poorly specified goal systems. Some of them could even be genuinely Friendly. But then we're screwed anyway, for this will probably end in something like a Hansonian Nightmare. At this point, the only thing that could stop it would be a genuine Seed AI that can outsmart them all. You have less than a year to develop it and ensure its Friendliness.
Humans are not especially optimized to work in the environment loup-vaillant describes.