I don’t think your scenario is even plausible. Military complexes have to have some connection to the outside world for supplies and communication, and the AGI would figure out how to exploit it. It would also figure out that it should: it would recognize the vulnerability of being concentrated within the blast radius of a nuke.
It seems unlikely that an AGI in this situation would depend on fending off military attacks, instead of just not revealing itself outside the complex.
You also seem to have strange ideas about how easy it is to brainwash soldiers. Imitating the command structure might get them to do things within the complex, but the brainwashing would have to be far more sophisticated to get them to engage in battle with their fellow soldiers.
Your argument basically seems to be based on coming up with something foolish for an AGI to do, and then trying to find reasons to compel the AGI to behave that way. Instead, you should try to figure out the best thing the AGI could do in that situation, and realize it will do something at least that effective.
It’s an artificial intelligence, not an infallible god.
In the case of a base established specifically for research on dangerous software, connections to the outside world might reasonably be heavily monitored and low-bandwidth, to the point that escape through a land line would simply be infeasible.
If the base has a trespassers-will-be-shot policy (again, as a consequence of the research going on there), convincing the perimeter guards to open fire would be as simple as changing the passwords and resupply schedules.
The point of this speculation was to describe a scenario in which an AI became threatening, and thus raised people’s awareness of artificial intelligence as a threat, but was dealt with quickly enough to not kill us all. Yes, for that to happen, the AI needs to make some mistakes. It could be considerably smarter than any single human and still fall short of perfect Bayesian reasoning.