The historical lack of runaway-AI events means there’s no data to which a model might be compared; countless fictional examples are worse than useless.
An AI might, say, take over an isolated military compound, brainwash the staff, and be legitimately confident in its ability to hold off conventional forces (armored vehicles and so on) for long enough to build an exosolar colony ship, but then be destroyed when it underestimates some Russian general’s willingness to use nuclear force in a hostage situation.
That’s what everyone says until some AI decides that its values motivate it to act like a stereotypical evil AI. It first kills off the people on a space mission, then sets off a nuclear war and sends out humanoid robots to kill everyone but a few survivors. The remaining people are kept loyal with a promise of cake. The cake is real, I promise.
An AI capable of figuring out how to brainwash humans can also figure out how to distribute itself over a network of poorly secured internet servers. Nuking one military complex is not going to kill it.
If it’s being created inside the secure military facility, it would have a supply of partially pre-brainwashed humans on hand, thanks to military discipline and rigid command structures. Rapid, unquestioning obedience might be as simple as properly duplicating the syntax of legitimate orders and security clearances. If, however, the facility has no physical connection to the internet, no textbooks on TCP/IP sitting around, and the AI itself is developed on some proprietary system (all as a result of those same security measures), it might consider internet-based backups simply not worth the hassle, and existing communication satellites too secure or too low-bandwidth.
I’m not claiming that this is a particularly likely situation, just one plausible scenario in which a hostile AI could become an obvious threat without killing us all, and then be decisively stopped without involving a Friendly AI.
I don’t think your scenario is even plausible. Military complexes have to have some connection to the outside world for supplies and communication, and the AGI would figure out how to exploit it. It would also figure out that it should: it would recognize the vulnerability of being concentrated within the blast radius of a single nuke.
It seems unlikely that an AGI in this situation would depend on fending off military attacks, instead of just not revealing itself outside the complex.
You also seem to have strange ideas about how easy it is to brainwash soldiers. Imitating the command structure might get them to do things within the complex, but the brainwashing would have to be a lot more sophisticated to get them to engage in battle with their fellow soldiers.
Your argument basically seems to be based on coming up with something foolish for an AGI to do, and then trying to find reasons to compel the AGI to behave that way. Instead, you should try to figure out the best thing the AGI could do in that situation, and realize it will do something at least that effective.
It’s an artificial intelligence, not an infallible god.
In the case of a base established specifically for research on dangerous software, connections to the outside world might reasonably be heavily monitored and low-bandwidth, to the point that escape through a land line would simply be infeasible.
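As a rough back-of-envelope (every number here is an illustrative assumption, not a claim about any real system), the arithmetic I have in mind looks something like this:

    # Illustrative sketch: how long would a self-copy take over a slow,
    # monitored land line? Both figures below are assumed for the example.
    payload_gb = 500   # assumed size of the AI's code plus learned state
    line_kbps = 128    # assumed bandwidth of the monitored land line

    payload_bits = payload_gb * 8e9
    seconds = payload_bits / (line_kbps * 1e3)
    print(f"~{seconds / 86400:.0f} days of continuous, unnoticed transfer")
    # prints roughly 362 days -- ample time for the monitoring to notice

At that scale, even a fairly sloppy monitoring regime makes a quiet escape through the land line implausible.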
If the base has a trespassers-will-be-shot policy (again, as a consequence of the research going on there), convincing the perimeter guards to open fire would be as simple as changing the passwords and resupply schedules.
The point of this speculation was to describe a scenario in which an AI became threatening, and thus raised people’s awareness of artificial intelligence as a threat, but was dealt with quickly enough to not kill us all. Yes, for that to happen, the AI needs to make some mistakes. It could be considerably smarter than any single human and still fall short of perfect Bayesian reasoning.
What makes you believe you are qualified to tell me how much confidence I have?