While military robots might be bad for other reasons, I don’t really see the path from this to doom. If AI-powered weaponry doesn’t work as expected, it might kill some people, but it can’t repair or replicate itself or make long-term plans, so it’s not really an extinction risk.
This AI-powered weaponry can always be hacked or modified, perhaps even talked to, all of which opens the door to it being used in more than one way. You can’t hack a bullet, but you can hack an AI-powered ship. So individually these systems might not be dangerous, but they don’t exist in isolation.
Also, militarisation of AI might create systems that are designed to be dangerous, amoral and without any proper oversight. This opens us up to a flood of potential dangers, some of which are hard to even predict now.
If military AI is dangerous, it’s not because it’s military. If a military robot can wield a gun, a civilian robot can certainly acquire one as well.
The military may create AI systems that are designed to be amoral, but it will not want systems that overinterpret orders or violate the chain of command. Here, as everywhere, if intentional misuse is even possible at all, alignment is critical, and unintentional takeoff remains the dominant risk.
In the seminal AI safety work Terminator, the Skynet system successfully triggers a world war because it is a military AI in command of the US nuclear arsenal, and thus has the authority to launch ICBMs. Ironically, given how often it is ridiculed, this gets the AI risk quite right but grievously misjudges the state of computer security. If Skynet were running on Amazon AWS instead of a military server cluster, it would only be marginally delayed in reaching the same outcome.
The prompting is not the hard part of operating an AI. If you can talk an AI ship into going rogue, a civilian AI can talk it into going rogue. This situation is inherently brimming with doom: it is latently doomed in multiple ways, and the military training and direct access to guns merely remove small roadblocks. All the risk materialized at once, when you created an AI with the cognitive capability to conceive of and implement plans that used a military vessel for its own goals. Whether the AI was specifically trained on this task is, in this case, really not the primary source of danger.
“My AI ship has gone rogue and is shelling the US coastline.”
“I hope you learnt a lesson here.”
“Yes. I will not put the AI on the ship next time.”
“You may be missing the problem here—”
Yes, a civilian robot can acquire a gun, but that still makes it safer than a military robot that already has a whole arsenal of military gadgets and weapons from the start. It would have to do additional work to acquire one, and it is still better to make it do more work and face more roadblocks rather than fewer.
I think we are mainly speculating about what the military might want. It might want a button that instantly kills all its enemies with one push, but it might not get that (or it might, who knows at this point). I personally do not think they will rank a more efficient AI (efficient at killing humans) below a less efficient but more controllable one. They will always want an edge over the enemy, and if that means sacrificing some controllability or anything else, they might just do that. But they might not even get that; they might get an uncontrollable and error-prone AI and nothing better. The military aren’t gods, they don’t always get what they want. And someone up top might decide “To hell with it, it’s good enough”, and that will be it.
And as for your ship analogy: it’s one thing to talk a civilian AI vessel into going rogue, and a different thing entirely to talk a frigate or a nuclear submarine into going rogue. The risks are different. One has control over a simple vessel, the other has control over a whole arsenal. My point is that the second increases risk substantially and should be avoided at all costs for security reasons.
I think it still increases the danger if an AI is trained without any moral guidance or any possibility of moral guardrails, and is instead trained to kill people and efficiently put humans in harm’s way. Current AI systems have something akin to Anthropic’s AI constitution, which tries to instil some moral guardrails and respect for human life and human rights. I don’t think AIs trained for the military are going to have the same principles applied to them in the slightest; in fact, it’s much more likely to be the opposite, since killing humans is the military’s business. I think the second example poses higher risks than the first (not saying the first is without risk, but I do believe it is still safer). There are levels to this, things that are more or less safe, things that make it harder or easier.
When people talk about p(doom) they generally mean the extinction risk directly from AI going rogue. The way I see it, that extinction-level risk mostly comes from self-replicating AI. An AI that can design and build silicon chips (or whatever the equivalent is) can also build guns, while an AI designed to operate a gun doesn’t seem any more likely to be good at building silicon chips.
I do worry that AI in direct control of nuclear weapons would be an extinction risk, but for standard software engineering reasons (all software is terrible), not for AI-safety reasons. The good news is that I don’t really think there’s any good reason to put nuclear weapons directly in the hands of AI. The practical nuclear deterrent is submarines, and they don’t need particularly fast reactions to be effective.