Suppose an AI were building autonomous weapons in secret. This would require some of the most advanced technology currently available. The AI would need either to construct a sophisticated factory in a hidden location or to conceal production within a shell company. The first is very unlikely; the second is more plausible, though still improbable. Better regulation and closer scrutiny of weapons manufacturers would help mitigate this risk.
Lucas Pfeifer
Points of response:
An intelligent lethal machine is one that selects and attacks targets using hardware and software specialized for identifying and killing humans.
Clearly, intelligence exists on a spectrum. We should set a limit on how much intelligence we are willing to build into machines whose primary purpose is to destroy humans and our habitat.
Though militaries take more thorough precautions than most organizations, history offers many examples of military defeats that better planning could have prevented.
An LLM like GPT that hypothetically escaped its safety mechanisms would be limited in the damage it could do by the systems it could compromise. The most dangerous rogue AI is one that gains unauthorized access to military hardware. The more intelligent that hardware, the more damage a rogue AI could cause with it before being eliminated. In the worst case, a rogue AI could use that hardware to bring about complete societal collapse.
Once countries adopt a weapon, they resist giving it up, even when doing so would serve the global community's interests. In some areas we have made progress. With enough foresight, though, we (the global community) could plan ahead by placing limits on intelligent lethal machines sooner rather than later.
Yes, sometimes we need to prevent humans from causing harm. At the sub-national level, current technology is sufficient for this. At the level of nations, we should agree to concrete limits on the intelligence of weapons and trust our fellow humans to honor those limits. Our governments have made progress on this issue, though more remains to be done.
For example:
https://www.csis.org/analysis/one-key-challenge-diplomacy-ai-chinas-military-does-not-want-talk
“With such loud public support in prominent Chinese venues, one might think that the U.S. military need only ask in order to begin a dialogue on AI risk reduction with the Chinese military.
Alas, during my tenure as the Director of Strategy and Policy at the DOD Joint Artificial Intelligence Center, the DOD did just that, twice. Both times the Chinese military refused to allow the topic on the agenda.
Though the fact of the DOD’s request for a dialogue and China’s refusal is unclassified—nearly everything that the United States says to China in formal channels is—the U.S. government has not yet publicly acknowledged this fact. It is time for this telling detail to come to light.
...(Gregory C. Allen is the director of the Artificial Intelligence (AI) Governance Project and a senior fellow in the Strategic Technologies Program at the Center for Strategic and International Studies in Washington, D.C.)”
On a topic so vital to international welfare, officials from these two countries should be holding many discussions, especially given how video-conferencing technology has made international dialogue far more convenient.
Why, then, have we heard of so little progress on this matter? On the contrary, development of lethal AI weapons continues at a brisk pace.