In your case, force is needed to actually push most organisations to participate in such a project, and the worst ones, those that want to build AI first to take over the world, will not participate at all. The IAEA is an example of such an organisation, but it was not able to stop North Korea from creating its nukes.

Because of the above, you need a powerful enforcement agency above your AI agency. It could use conventional weapons, mostly nukes, or some form of narrow AI to predict where strong AI is being created, or both. Basically, it means creating a world government designed especially to contain AI.

This is improbable in the current world, as nobody will create a world government mandated to nuke AI labs based only on reading Bostrom's and EY's books. The only chance for its creation is if some very spectacular AI accident happens, like the hacking of 1000 airplanes and crashing them into 1000 nuclear plants using narrow AI with some machine-learning capabilities. In that case, a global ban on AI seems possible.
never mind this was stupid
Reliable verification methods are a dream, of course, but the idea of members being 'forbidden from sharing this information with non-members' is even more fanciful.
Is there no way to actually delete a comment? :)
Not after someone already replied to it, I think.
Without replies, you can retract it, then refresh the page, and a Delete button appears.