How is AI governed and regulated, around the world?
This week, first came an open letter calling for a “pause” in “giant AI” research (LW discussion) that has received worldwide media coverage; then Eliezer went further in TIME Ideas and sketched what a globally enforced ban on AGI research would look like (LW discussion).
Many online voices are saying it could never happen, but I think they underestimate the visceral, common-sense fear that many ordinary people have about artificial intelligence. Most people are not looking to transcend humanity, nor are they particularly in denial about the possibility of technology producing something smarter than humans.
There is genuine potential for an anti-AI movement to come into being that simply wants to “shut it all down”. Of course, such a movement would quickly run up against the power centers in science, commerce, and national security that want to push the boundaries. Between the corporate scramble to develop and market ever more powerful software, and the new era of geopolitical polarization, it might seem impossible that the “AI arms race” could ever be halted.
However, fear is a great motivator. During the Cold War, fear of nuclear war compelled the USA and the USSR to restrain themselves in their otherwise unrestrained struggle for supremacy; it also led to the creation of the Nuclear Non-Proliferation Treaty and the International Atomic Energy Agency, a system for the worldwide management of nuclear technology that ultimately answers to the United Nations Security Council, the most powerful institution in human affairs.
If the member states of the United Nations, and in particular the ruling elites of the permanent members of the Security Council, genuinely became convinced that sufficiently powerful artificial intelligence is a threat to the human race, they really could organize a global ban on AGI research, up to and including military enforcement of the ban. They would have to deal with mutual distrust, but there are ways around that: to mention one example, they could remain suspicious of each other while cooperating to force a ban on everyone else.
I won’t express an opinion on how successful such a ban might be, or how long it would last; but the creation of a global anti-AI regime is, I think, a political possibility.
However, if it were to happen, it would have to develop out of the frameworks for AI regulation and governance that the world’s nations are already developing, individually and collectively. That’s why I made this post: to collect information on how AI is currently regulated. It would be nice to have some facts on how it is regulated in each of the G-20 countries, for example.
For now I’ll just link to Wikipedia:
Regulation of artificial intelligence
which currently has sections on AI regulation in three of the five permanent Security Council members: Britain, America, and China (Russia and France are not mentioned, though there are sections on European regulation).