Yes, sometimes we need to prevent humans from causing harm. For sub-national cases, current technology is sufficient for this purpose. On the scale of nations, we should agree to concrete limits on the intelligence of weapons, and trust our fellow humans to follow those limits. Our governments have made some progress on this issue, though much more remains to be made.
For example:
https://www.csis.org/analysis/one-key-challenge-diplomacy-ai-chinas-military-does-not-want-talk
“With such loud public support in prominent Chinese venues, one might think that the U.S. military need only ask in order to begin a dialogue on AI risk reduction with the Chinese military.
Alas, during my tenure as the Director of Strategy and Policy at the DOD Joint Artificial Intelligence Center, the DOD did just that, twice. Both times the Chinese military refused to allow the topic on the agenda.
Though the fact of the DOD’s request for a dialogue and China’s refusal is unclassified—nearly everything that the United States says to China in formal channels is—the U.S. government has not yet publicly acknowledged this fact. It is time for this telling detail to come to light.
...(Gregory C. Allen is the director of the Artificial Intelligence (AI) Governance Project and a senior fellow in the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, D.C.)”
On a topic so vital to international welfare, officials from these two countries should be holding many discussions, especially now that video-conferencing has made international dialogue far more convenient.
Why, then, have we heard of so little progress on this matter? On the contrary, development of lethal AI weapons continues at a brisk pace.