Thanks Katya. I’m diving in a bit late here, but I would like to query the group on the potential threats posed by AI. I’ve been intrigued by AI for thirty years and have followed the field peripherally. There is something very appealing about the idea of creating truly intelligent machines and, even more exciting, seeing those machines improve themselves. However, I have, along with some others (most recently Elon Musk), become increasingly concerned about the threat that our technology, and particularly AI, may pose to us. This chapter on potential AI superpowers highlights this possibility, but I don’t see anyone commenting here yet on how real this threat is, or what we might do to prevent it from becoming real. Musk has called for some kind of regulation, and I have to agree. I also think an int’l treaty discussion would be a good start, similar to what we have in place for nuclear weapons, cluster bombs, land mines, etc. If the superpowers discussed here are at all possible in the coming decades, the potential threat is huge, and we should at the very least start a serious discussion about how to manage it. Thoughts?
It might be a good idea somewhere down the line, but co-ordination of that kind would likely be very difficult.
It might not be so necessary if the problem of friendliness is solved, and AI is built to specification. This would also be very difficult, but it would also likely be much more permanently successful, as a friendly superintelligent AI would ensure no subsequent unfriendly superintelligences arise.
That sounds a bit too simplistic to me, since it relies on many what-ifs. Int’l law is also far from certain to provide good solutions, but a mix of national and int’l dialogue seems the place to start. We’re also going to see localities get involved with their own ordinances and rules, or simply cultural norms. I’d rather see the discussion happen sooner rather than later, because we are indeed dealing with Pandora’s box here. Or, to put it more dramatically, as Musk did recently: in seeking strong AI we are perhaps summoning the demon. Let’s discuss these weighty issues before it’s too late.
Bostrom discusses the Baruch Plan, and there are important lessons to learn from that historical experience. I agree that we need a multilateral framework to regulate AI.
However, it also has to be something that gains agreement. Baruch and the United States wanted to hand regulation of nuclear technology over to an international agency.
Of all things, the Soviet Union disagreed BEFORE they even quite had the Bomb! (Although they were researching it.)
Why? Because they knew that they would be out-voted in this new entity’s proposed governance structure.
Figuring out the framework to present will be a challenge, and there will not be a dozen chances...
Thanks Steve. I need to dive into this book for sure.
We need a global charter for AI transparency.
We need a globally funded AI Nanny project of the kind Ben Goertzel has suggested.
Every AGI project should spend 30% of its budget on the safety and control problem: two-thirds of that on project-specific work and one-third on general research (i.e., roughly 20% and 10% of the total budget, respectively).
We must find a way for the financial value created by AI (narrow AI today, AGI tomorrow) to compensate for technology-driven collective redundancies and to support a sustainable economic and social model.
If international leadership could become aware of AI issues, discuss them and sensibly respond to them, I too think that might help in mitigating the various threats that come with AI.
Here are some interesting pieces of writing on exactly this topic:
How well will policy-makers handle AGI?
AGI outcomes and civilisational competence
Great, thanks for the links.
FWIW, there already is one organization working specifically on Friendliness: MIRI. Friendliness research in general is indeed underfunded relative to its importance, and finishing this work before someone builds an Unfriendly AI is indeed a nontrivial problem.
So would making international agreements work. Artaxerxes phrased it as “co-ordination of that kind would likely be very difficult”; I’ll try to expand on that.
The lure of superintelligent AI is that of an extremely powerful tool to shape the world. We have various entities in this world, including large nation states with vast resources, that are engaged in various forms of strong competition. For each of those entities, AI is potentially a game-winner. And unlike nuclear weapons, you don’t need huge, conspicuous infrastructure to develop it; just some computers (and you’ll likely keep server farms for various reasons anyway; what’s one more?) and a bunch of researchers that you can hide in a basement and move around as needed to evade detection. The obvious game-theoretic move, then, is to push for the international outlawing of superintelligent AI, and then pour lots of money into your own black budgets to develop it before anyone else does.
Nuclear weapons weren’t outlawed before we had any, or even limited to one or two countries, though that would have been much easier than with AI. The Ottawa Treaty was not signed by the US, because they decided anti-personnel mines were just too useful to give up, and that usefulness is a rounding error compared to superintelligent AI. Our species can’t even coordinate to limit our CO2 emissions enough to avert likely major climate impacts, and the downside of doing that would be much lower than the downside of giving up superintelligent AI.
I will also note that, for the moment, there is a significant chance that the large nation states simply don’t take the potential of superintelligent AI seriously. This might be the best possible position for them to take. If they start to appreciate it without also fully appreciating the difficulty of FAI (and maybe even if they do; the calculation is tricky when you appreciate the difficulty but can’t also coordinate), a full-blown arms race is likely to result. The expected threat from that IMO outweighs the expected benefit of attempting to internationally outlaw the implementation of superintelligent AI.
Thanks Sebastian. I agree with your points, and it scares me even more to think about the implications of what is already happening. Surely the US, China, Russia, etc., already realize the game-changing potential of superintelligent AI and are working hard to make it a reality. It’s probably already a new (covert) arms race. But to me this is very strong support for seeking int’l treaty solutions now and working very hard in the coming years to strengthen that regime. Because once unfriendly AI is out, as with Pandora’s box, there’s no putting it back. I think this issue really needs to be elevated very quickly.
Thinking about policy responses seems quite neglected to me. It’s true there are prima facie reasons to expect regulation or global cooperation to be ‘hard’, but the details of the situation deserve a great deal more thought, and ‘hard’ should be compared to the difficulty of developing some narrow variety of AI before anyone else develops any powerful AI.
In that sentence “superintelligent AI” can be replaced with pretty much anything, starting with “time travel” and ending with “mind-control ray”.