What is an “intelligent” machine? What is a machine that is “designed” to kill people? And why should a machine with limited intelligence that is “designed” to kill, such as an AIM-9, be more of a threat than a machine with vast intelligence designed to accomplish a seemingly innocuous goal, one that has the destruction of humanity as an unintended side effect?
Currently, leading militaries around the world are developing and using:
Drone swarms
Suicide drones
Assassin drones
Intelligent AI pilots for fighter jets
Targeting based on facial recognition
Robot dogs with mounted guns
None of these things scares me as much as GPT-4. Militaries are overwhelmingly staid and conservative institutions; they are the ones most likely to require extensive safeguards and humans-in-the-loop. What does scare me is the notion of a private entity developing a superintelligence, or an uncontrolled iterative process that will lead to one, and accidentally letting it loose.
An intelligent lethal machine is one that selects and attacks a target using hardware and software specialized for identifying and killing humans.
Clearly, intelligence lies on a spectrum. We should set a limit on how much intelligence we are willing to build into machines whose primary purpose is to destroy humans and our habitat.
Though militaries take more thorough precautions than most organizations, history offers many examples of militaries suffering defeats that better planning could have avoided.
An LLM like GPT that hypothetically escaped its safety mechanisms would be limited in the damage it could do by the systems it could compromise. The most dangerous rogue AI is one that could gain unauthorized access to military hardware: the more intelligent that hardware, the more damage a rogue AI could do with it before being eliminated. In the worst case, a rogue AI would use that hardware to cause complete societal collapse.
Once countries adopt weaponry, they resist giving it up, even when doing so would be in the global community’s best interest. In some areas we have made progress. With enough foresight, we (the global community) could plan ahead by placing limits on intelligent lethal machines sooner rather than later.