Nice, that’s almost exactly how I intended it. Except that I wasn’t thinking of the “stars” as satellites looking for individual humans to send propaganda at (which IMO is pretty close to “communicating”), but rather a network of satellites forming a single “screen” across the sky that plays a video infecting any baseline humans who look at it.
In my headcanon the original negotiators specified that sunlight would still reach the earth unimpeded, but didn’t specify that no AI satellites would be visible from the Earth. I don’t have headcanon explanations for exactly how the adversanimals arose or how the earth became desolate though.
(Oh, also, I think of the attack as being inefficient less because of lack of data, since AIs can just spin up humans to experiment on, and more because of the inherent difficulty of overwriting someone’s cognition via only a brief visual stimulus. Though now that I think about it more, presumably once someone has been captured the next thing you’d get them to do is spend a lot of time staring at a region of the sky that will reprogram them in more sophisticated ways. So maybe the normal glitchers in my story are unrealistically incompetent.)
That was what I was thinking, yes. “A pact would normally allow voluntary communication to be initiated with the AIs, so any glitcher which had been successfully attacked would have simply communicated back to its masters, either downloading new instructions & attacks or finetuning the existing ones or being puppeted directly by the AIs, sometime over the past centuries or millennia; if nothing else, they have an unlimited amount of time to stare at the sky and be reprogrammed arbitrarily after the initial exploit; so glitchers are indeed ‘glitchy’ and must represent a permanently failed attack method. That is why they bumble around semi-harmlessly: a broken worm or virus can cause a lot of trouble as it futilely portscans or DoSes targets or goes through infinite loops etc, even if the code is buggy and has accidentally locked out its creators as well as everyone else.”
My headcanon for the animals was that early on, the AIs released viruses that genetically modified non-human animals in ways that don’t violate the pact.
I didn’t think the pact could have been as broad as “the terrestrial Earth will be left unmodified,” because the causal impact of the AIs’ actions certainly changed things. I assumed it was something like “AIs and AI-created technologies may not do anything that interferes with humans’ actions on Earth, or harms humans in any way”—but genetic engineering instructions sent from outside the Earth, presumably pre-collapse, didn’t qualify because they didn’t affect humans directly; they made animals affect humans, which was parsed as similar to the environment’s impact on humans rather than as an AI technology.