Partial anonymity online is easy. Perfect anonymity against sufficiently well-resourced and determined adversaries is difficult or impossible. Packets do have to come from somewhere, and speed-of-light delays put bounds on location. If you can convince the network operators to help, you can trace paths back hop by hop. You might find a proxy or a bot, but you can thwack that and/or keep tracing backwards through the network.
If there were some piece of super-duper malware (the rogue AI) loose on the network, I suspect it could be contained by a sufficiently determined response.
Perfect anonymity against sufficiently well-resourced and determined adversaries is difficult or impossible. … You might find a proxy or a bot, but you can thwack that and/or keep tracing backwards in the network.
No, you can’t. You should read up on how Tor works; this is a well-studied question, and unfortunately the conclusions are the opposite of what you have written. The problem is that there are lots of proxies around, most of which don’t keep logs, and you can set up a chain such that if any one of them refuses to keep logs, the connection can’t be traced.
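To make the chain argument concrete, here is a toy simulation (no real cryptography, invented names) of the onion-routing idea: the client wraps its request in one layer per relay, and each relay peels one layer, so it only ever learns its predecessor and its successor. Hop-by-hop tracing then stalls at the first relay that kept no logs.

```python
# Toy onion-routing sketch: each relay sees only (previous hop, next hop),
# never (client, destination) together. Names are illustrative only.

def build_onion(destination, relays):
    """Wrap the destination in one layer per relay, innermost first."""
    onion = {"next_hop": destination, "payload": "request"}
    for relay in reversed(relays):
        onion = {"next_hop": relay, "payload": onion}
    return onion

def route(onion, sender):
    """Forward the onion along the chain; record what each relay could log."""
    logs = []
    prev = sender
    while isinstance(onion["payload"], dict):
        hop = onion["next_hop"]
        onion = onion["payload"]
        logs.append((hop, prev, onion["next_hop"]))  # all this relay sees
        prev = hop
    return logs

onion = build_onion("destination", ["relayA", "relayB", "relayC"])
for relay, came_from, goes_to in route(onion, "client"):
    print(f"{relay}: saw traffic from {came_from}, forwarded to {goes_to}")
```

Only the first relay learns the client's address and only the last learns the destination; if the middle relay discards its logs, the two ends cannot be joined afterwards.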
If people knew there was a rogue AI around, they could go around visiting datacenters and use physical tricks to try to detect its presence. But if it maintained the pretense of being an anonymous human or anonymous humans, this probably wouldn’t happen.
I understand Tor quite well. Whether connections can be traced depends on how powerful you think the attacker is. You can potentially get somewhere doing global timing attacks—though this depends on the volume and timing properties of the traffic of interest.
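A minimal sketch of what a global timing attack looks like, under toy assumptions (synthetic flows, small random per-hop jitter, made-up names): an observer who sees flows both entering and exiting the network correlates inter-packet gaps to link them, without breaking any encryption.

```python
# Toy traffic-correlation sketch: match exit flows to entry flows by
# comparing inter-packet gap patterns. Purely illustrative assumptions.
import random

random.seed(0)

def make_flow(n=50):
    """Random inter-packet gaps (seconds) for a synthetic flow."""
    return [random.expovariate(10) for _ in range(n)]

def similarity(a, b):
    """Crude timing correlation score: negative total gap mismatch."""
    return -sum(abs(x - y) for x, y in zip(a, b))

# Three flows enter the anonymity network...
entry_flows = {name: make_flow() for name in ["alice", "bob", "carol"]}

# ...and exit with small per-hop queueing jitter, in unknown order.
def jitter(flow):
    return [gap + random.uniform(0, 0.005) for gap in flow]

exit_flows = {
    "exit1": jitter(entry_flows["bob"]),
    "exit2": jitter(entry_flows["alice"]),
    "exit3": jitter(entry_flows["carol"]),
}

for exit_name, flow in exit_flows.items():
    match = max(entry_flows, key=lambda n: similarity(entry_flows[n], flow))
    print(f"{exit_name} correlates with entry flow of {match}")
```

The point of the caveat in the text is visible in the parameters: with low traffic volume or heavy padding/jitter, the gap patterns blur together and the correlation stops working.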
Maybe more importantly, if enough of the Tor nodes cooperate with the attacker, you can break the anonymity. If you could convince enough Tor operators there was a threat, you could mount that attack. Sufficiently scary malware communicating over Tor ought to do the trick. Alternatively, the powerful attacker might try to compromise the Tor nodes. In the scenario we’re discussing, there are powerful AIs capable of generating exploits. It seems strange to assume that the other side (the AGI-hunters) hasn’t got specialized software able to do likewise. Automatic exploit finding and testing is more or less current state-of-the-art. It does not require superhuman AGI.
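A back-of-envelope model for the "enough nodes cooperate" attack, under simplifying assumptions (independent uniform relay selection per circuit; real Tor's persistent guard nodes change these numbers): a circuit is deanonymized when the attacker observes both its entry and exit relay, which happens with probability roughly c² when the attacker controls a fraction c of each position, and the odds compound over many circuits.

```python
# Rough model: probability an attacker controlling a fraction c of relays
# fully observes a circuit, per circuit and over n circuits. This ignores
# Tor's guard pinning and bandwidth weighting -- illustrative only.

def p_deanon_per_circuit(c):
    """Attacker holds both the entry and exit position: c * c."""
    return c * c

def p_deanon_over_circuits(c, n):
    """Chance at least one of n independent circuits is fully observed."""
    return 1 - (1 - c * c) ** n

for c in (0.05, 0.20, 0.50):
    print(f"c={c:.2f}: per-circuit {p_deanon_per_circuit(c):.4f}, "
          f"over 1000 circuits {p_deanon_over_circuits(c, 1000):.3f}")
```

Even modest cooperation rates become decisive over time, which is why recruiting or compromising operators is the more worrying attack than passive tracing.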