That sounds a bit too simplistic to me, since it relies on many what-ifs. Int'l law is also far from certain to provide good solutions, but a mix of national and int'l dialogue seems like the place to start. We're also going to see localities get involved with their own ordinances and rules, or simply cultural norms. I'd rather see the discussion happen sooner rather than later, because we are indeed dealing with Pandora's Box here. Or, to put it more dramatically, as Musk did recently: in seeking strong AI we are perhaps summoning the demon. Let's discuss these weighty issues before it's too late.
Bostrom discusses the Baruch Plan, and the lessons to learn from that historical experience are enormous. I agree that we need a multilateral framework to regulate AI.
However, it also has to be something that gains agreement. Baruch and the United States wanted to turn regulation of nuclear technology over to an international agency.
Of all things, the Soviet Union disagreed BEFORE they even had the Bomb! (Although they were researching it.)
Why? Because they knew that they would be out-voted in this new entity’s proposed governance structure.
Figuring out the framework to present will be a challenge, and there will not be a dozen chances...
Thanks Steve. I need to dive into this book for sure.
We need a global charter for AI transparency.
We need a globally funded, global AI Nanny project, as Ben Goertzel suggested.
Every AGI project should spend 30% of its budget on the safety and control problem: 2/3 of that on project-related safety work and 1/3 on general research (i.e., 20% of the total budget project-related and 10% general).
We must find a way for the financial value created by AI (narrow AI today, AGI tomorrow) to compensate for technology-driven collective redundancies and to support a sustainable economic and social model.
If international leadership could become aware of AI issues, discuss them and sensibly respond to them, I too think that might help in mitigating the various threats that come with AI.
Here are some interesting pieces of writing on exactly this topic:
How well will policy-makers handle AGI?
AGI outcomes and civilisational competence
Great, thanks for the links.
FWIW, there already is one organization working specifically on Friendliness: MIRI. Friendliness research in general is indeed underfunded relative to its importance, and finishing this work before someone builds an Unfriendly AI is indeed a nontrivial problem.
So would making international agreements work. Artaxerxes phrased it as "co-ordination of this kind would likely be very difficult"; I'll try to expand on that.
The lure of superintelligent AI is that of an extremely powerful tool to shape the world. We have various entities in this world, including large nation states with vast resources, that are engaged in various forms of strong competition. For each of those entities, AI is potentially a game-winner. And unlike nuclear weapons, you don't need huge, conspicuous infrastructure to develop it; just some computers (and you'll likely keep server farms around for various reasons anyway; what's one more?) and a bunch of researchers you can hide in a basement and move around as needed to evade detection. The obvious game-theoretic move, then, is to push for an international ban on superintelligent AI, and then pour money into your own black budgets to develop it before anyone else does.
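To make that incentive structure concrete: it is essentially a prisoner's dilemma. Here is a minimal sketch in Python with purely illustrative payoff numbers (the specific values are my assumption, not anything established in this thread); the point is only that "defect" dominates "comply" for each player, so mutual defection, i.e. a covert arms race, is the equilibrium.

```python
# Toy payoff matrix for the arms-race dynamic described above.
# Payoff values are hypothetical, chosen only to illustrate the structure.

COMPLY, DEFECT = 0, 1

# payoff[my_move][their_move] = my payoff
payoff = [
    [ 3, -10],  # I comply: mutual restraint / they alone get superintelligent AI
    [10,  -5],  # I defect: I alone get it / risky covert race on both sides
]

def best_response(their_move):
    """The move that maximizes my payoff against a fixed move by the other side."""
    return max((COMPLY, DEFECT), key=lambda my_move: payoff[my_move][their_move])

# Defecting is the best response no matter what the other side does,
# so (DEFECT, DEFECT) is the unique Nash equilibrium...
assert best_response(COMPLY) == DEFECT
assert best_response(DEFECT) == DEFECT

# ...even though mutual compliance (3) beats mutual defection (-5) for both.
print(payoff[COMPLY][COMPLY], ">", payoff[DEFECT][DEFECT])
```

Under numbers like these, a treaty only helps if it actually changes the payoffs, e.g. by making covert defection detectable and punishable.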
Nuclear weapons weren’t outlawed before we had any, or even limited to one or two countries, though that would have been much easier than with AI. The Ottawa Treaty was not signed by the US, because they decided anti-personnel mines were just too useful to give up, and that usefulness is a rounding error compared to superintelligent AI. Our species can’t even coordinate to sufficiently limit our emission of CO2 to avert likely major climate impacts, and the downside to doing that would be much lower.
I will also note that for the moment, there is a significant chance that the large nation states simply don't take the potential of superintelligent AI seriously. This might be the best possible position for them to take. If they start to appreciate it, without also fully appreciating the difficulty of FAI (and maybe even then; the calculation is tricky if you appreciate the difficulty but can't coordinate), a full-blown arms race is likely to result. The expected threat from that IMO outweighs the expected benefit of attempting to internationally outlaw superintelligent AI.
Thanks Sebastian. I agree with your points, and it scares me even more to think about the implications of what is already happening. Surely the US, China, Russia, etc., already realize the game-changing potential of superintelligent AI and are working hard to make it a reality. It's probably already a new (covert) arms race. But this to me is very strong support for seeking int'l treaty solutions now and working very hard in the coming years to strengthen that regime. Because once unfriendly AI gets out of the bag, as with Pandora's Box, there's no pushing it back in. I think this issue really needs to be elevated very quickly.
Thinking about policy responses seems quite neglected to me. It’s true there are prima facie reasons to expect regulation or global cooperation to be ‘hard’, but the details of the situation deserve a great deal more thought, and ‘hard’ should be compared to the difficulty of developing some narrow variety of AI before anyone else develops any powerful AI.
In that sentence, "superintelligent AI" can be replaced with pretty much anything, starting with "time travel" and ending with "mind-control ray".