What? No they aren’t, they’re trying to establish protocols within which a general artificial intelligence can be safely created. Whether a general artificial intelligence should qualify as a weapon of mass destruction is a different argument, but it certainly doesn’t qualify as one from a legal point of view, and if the SIAI safety/friendliness plan works, it shouldn’t from a practical point of view either!
but it certainly doesn’t qualify as one from a legal point of view
I’m not nearly so confident. The Powers That Be don’t need to be all that reasonable about these things. Because of the bit about the Power.
I expect a security-oriented government body would be able to come up with as many ways to make creating a superintelligence illegal as MoR!Harry could find ways to weaponise Hufflepuffs. Calling it a WoMD would just be one of them.
It’s conceivable that, at some point, building design frameworks for friendly artificial intelligences (or, more plausibly, artificial intelligences in general) might be made illegal, but it certainly isn’t illegal now.
http://en.wikipedia.org/wiki/Steve_Jackson_Games,_Inc._v._United_States_Secret_Service
Legality really doesn’t seem to be a huge factor in whether the Secret Service can inconvenience you. And if they raided a gaming company, I could see them plausibly raiding an AI development organization.
That said, I don’t see anything to suggest it’s particularly likely. But a government investigation, all by itself, is incredibly disruptive even if you don’t end up guilty of any crimes.
Edit: Fixed from FBI to Secret Service.
Edit: Response was written to original (brief) version of the parent (quoted below).
No they aren’t, they’re trying to create an artificial intelligence.
Encryption software has, at times, been legally declared ‘munitions’, the export of which can be a serious crime. Since an AI actually could be deployed as a weapon—and even a ‘Friendly’ version will be perceived to be causing massive destruction by at least one interest group—throwing that sort of label around would be comparatively reasonable. Not that I would make that designation. But I’m not a paramilitary organisation with relevant official status.
As for things that are not Weapons of Mass Destruction, try biological and chemical weapons (of the kind that actually exist). If you want to cause mass destruction, use a nuke. Don’t have one of them? Use conventional explosives. If you want to do serious damage with a chemical weapon… pick a chemical that explodes. That phrase is broken.