I’m being heavily downvoted here, but what exactly did I say wrong? In fact, I believe I said nothing wrong.
It does worsen the situation: Israeli military forces are mass murdering Palestinian civilians based on AI decisions, with operators just rubber-stamping the actions.
Here is the +972 Mag Report: https://www.972mag.com/lavender-ai-israeli-army-gaza/
I highly advise you to read it, as it goes into much greater detail about how the system actually works internally.
Yes, a civilian robot can acquire a gun, but that is still safer than a military robot that comes with a whole arsenal of weapons and military hardware from the start. The civilian robot would have to do additional work to acquire one, and it is still better to force it to do more work and face more roadblocks rather than fewer.
I think we are mainly speculating about what the military might want. It might want a button that instantly kills all its enemies with one push, but it might not get that (or it might, who knows at this point). I personally do not think they will rank a more efficient AI (efficient at killing humans) below a less efficient but more controllable one. They will want an edge over the enemy. Always. And if that means sacrificing some controllability or anything else, they might just do that. But they might not even get that; they might end up with an uncontrollable, error-prone AI and nothing better. The military aren’t gods, and they don’t always get what they want. And someone up top might decide “To hell with it, it’s good enough,” and that will be that.
And as for your ship analogy: it’s one thing to talk a civilian AI vessel into going rogue, and a different thing entirely to talk a frigate or a nuclear submarine into going rogue. The risks are different. One has control over a simple vessel; the other has control over a whole arsenal. My point is that the second increases risk substantially and should be avoided as much as possible for security reasons.
I still think the danger increases if an AI is trained without any moral guidance or any possibility of moral guardrails, and is instead trained to kill people and efficiently put humans in harm’s way. Current AI systems have something akin to Anthropic’s AI constitution, which tries to impose some moral guardrails and respect for human life and human rights. I don’t think AIs trained for the military are going to have the same principles applied to them in the slightest; in fact, it’s much more likely to be the opposite, since killing humans is the military’s business. I think the second example poses higher risks than the first (not that the first is without risk, but I do believe it is still safer). There are levels to this, and there are things that make it more or less safe, harder or easier.