In fact, we have too many good AI projects, which may result in incompatible versions of AI friendliness and wars between AIs. This has often happened in human history before, most typically when two versions of one religion fight each other (like Shi’ah against Sunni, or different versions of Buddhism).
I think it would be much better to concentrate all friendly AI efforts under the control of one person or organisation.
Basically, we are moving from an underinvestment stage to an overinvestment stage.
There is no way around this problem, because the run-up to the singularity is going to bring huge economic and military benefits to whoever has slightly better AI than anybody else. Moloch is hard to beat.
OK, we have many nuclear powers in the world, but only one main non-proliferation agency, the IAEA, and somehow it works.
In the same way, we could have many AI projects in the world, but one agency which provides safety guidelines (and it would be logical for that to be MIRI+Bostrom, as they have done the most widely known research on the topic).
But if we have many agencies which provide different guidelines, or even several AIs with slightly different friendliness, we are doomed.
Strongly disagree that our current nuclear weapons situation “works”. At this very moment a large number of hydrogen bombs sit atop missiles, ready at a moment’s notice to kill hundreds of millions of people. Letting North Korea get atomic weapons required major civilization-level incompetence.
Moreover, the nuclear weapons situation is much simpler than the AI situation. Pretty much everyone agrees that a nuclear weapon going off in an inhabited area is a big deal that can quickly make life worse for all involved. It is not the case that everyone agrees that general AI is such a big deal. All the official nuclear powers know that there will be a significant negative response directed at them if they bomb anyone else. They do not know this about AI.
It will probably be much easier to use an AI against someone secretly.
You could try to drop an atomic bomb on someone without them knowing who dropped the bomb on them. But you cannot drop an atomic bomb on them without them knowing that someone dropped the bomb on them.
But you could give your AI the task of inventing ways to move things closer to your desired outcome without creating suspicion. The obvious options would be to make it happen as a “natural” outcome, or to cast the suspicion on someone else, or maybe to reach the goal in a way that will make people believe it didn’t happen or that it wasn’t your goal at all. (A superhuman AI could find yet more options; some of them could be incomprehensible to humans. Also options like: the whole world turns into utter chaos; by the way, your original goal is completed, but everyone is now too busy and too confused to even notice it or care about it.) How is anyone going to punish that?
I agree, it works only in a limited sense: there has been no nuclear war for 70 years, but proliferation and the risks still exist and are even growing.