I’m sure there are circumstances under which a “rogue AI” does something very scary and leads to a very serious attempt to regulate AI worldwide, e.g. with coordination at the level of the UN Security Council. The obvious analogy once again concerns nuclear weapons: proliferation fears in the 1960s led to the creation of the NPT, the Nuclear Non-Proliferation Treaty. Signatories agree that only the five UNSC permanent members are allowed to have nuclear weapons, and in return the permanent members agree to help other signatories develop non-military uses of nuclear power. The treaty definitely helped to curb proliferation, but it’s far from perfect. The official nuclear weapons states are surely willing to bend the rules and assist allies in obtaining weapons capability if it is strategically desirable and can be done deniably; and not every country signed the treaty, and some of the holdouts (e.g. India, Pakistan) are now nuclear weapons states.
Part of the NPT regime is the IAEA, the International Atomic Energy Agency. These are the people who, for example, carry out inspections in Iran. Again, the system has all kinds of troubles: it’s surrounded by spy plots and counterplots, and many nations would like to see the Security Council reformed so that the five victorious allies from World War 2 (US, UK, France, Russia, China) don’t hold all the power. But still, something like this might buy a little time.
If we followed the blueprint that was adopted to fight nuclear proliferation, the five permanent members would be in charge, and they would insist that potentially dangerous AI activities in every country take place under some form of strict surveillance by an International Artificial Intelligence Agency, while promising to share the benefits of safe AI with all nations. Despite all the foreseeable problems, something like this could buy time, but all the big powers would undoubtedly keep pursuing AI, in secret government programs or in open collaborations with civilian industry and academia.
The important difference is that nuclear weapons are destructive because they worked exactly as intended, while the AI in this scenario is destructive because it failed horrendously. Plus, the concept of a rogue AI has been firmly ingrained into public consciousness by now, which, as far as I know, was not the case with extremely destructive weapons in the 1940s [1]. So hopefully this would produce more public outrage (and fear among the elites themselves), and hence stricter external and internal limitations on all agents developing AIs. But in the end I agree: it’ll only buy time, maybe a few decades if we are lucky, to solve the problem properly or to build more sane political institutions.
[1] Yes, I’m sure there was a sci-fi novel or two before 1945 describing bombs of immense power, but I don’t think any of them were anywhere near as widely known as The Matrix or Terminator.