Or in simpler terms for Eliezer, the TL;DR of anonymousaisafety’s comment is that hacking is not magic, and Hollywood hacking is not real insofar as its ease of hacking is concerned. Effectors do not exist, which is again why instantly hacking human brains isn’t possible.
I don’t think that this TL;DR is particularly helpful.
People think attacks like Rowhammer are viable because security researchers keep releasing papers that say the attacks are viable.
If I posted one sentence saying “Rowhammer has too many limitations for it to be usable by an attacker”, I’d be given 30 links to papers from different security researchers, all making grandiose claims about how Rowhammer is totally a viable attack. Which is why, 8 years after the discovery of Rowhammer, we’ve had dozens of security researchers reproduce the attack and 0 attacks in the wild[1] that make use of it.
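For anyone who hasn’t read the papers, the core of Rowhammer really is just a tight access loop. Here is a minimal sketch (my own illustration, not code from any paper), assuming x86 with clflush available; addr_a and addr_b are hypothetical addresses the attacker has somehow already determined map to different rows of the same DRAM bank:

```c
#include <stdint.h>
#include <emmintrin.h> /* _mm_clflush */

/* Repeatedly activate two DRAM rows, flushing the cache each time so
   every read actually reaches DRAM instead of being served from cache. */
static void hammer(volatile uint64_t *addr_a, volatile uint64_t *addr_b,
                   unsigned long iterations)
{
    for (unsigned long i = 0; i < iterations; i++) {
        (void)*addr_a;                      /* activate row A */
        (void)*addr_b;                      /* activate row B */
        _mm_clflush((const void *)addr_a);  /* evict so the next read misses */
        _mm_clflush((const void *)addr_b);
    }
    /* If (and only if) the DIMM is vulnerable, the refresh interval
       cooperates, and the hammered rows are physically adjacent to a
       victim row, some bits in that victim row may flip. */
}
```

The loop is the trivial part. Everything that makes the attack impractical lives outside it: learning the physical memory layout, getting co-located allocations, surviving ECC and target-row-refresh mitigations, and landing a flip in a bit that actually matters.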
If my other posts haven’t made this clear, I think almost all disagreements in AI x-risk come down to a debate over high-level vs. low-level analysis. Many things sound true as a sound-bite or quick rebuttal in a forum post, but I’m arguing, from the perspective of a career spent working on hardware/software systems, that we’ve accumulated enough low-level evidence (“the devil is in the details”) to falsify the high-level claim entirely.
We can argue that just because we don’t know that someone has used Rowhammer, or a similar probabilistic hardware vulnerability, doesn’t mean that someone hasn’t. I don’t know if that’s a useful tangent either. The problem is that people use these side-channel attacks as an “I win” button in arguments about secure software systems, as if the existence of side-channel exploits were proof that security is a lost cause. It isn’t. It isn’t about the intelligence of the adversary; it’s that the target basically needs to be sitting there, helping the attack happen. On any platform where part of the stack is running someone else’s code, yeah, you’re going to get pwned if you just accept arbitrary code, so maybe don’t do that? It is not rocket science.
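To make the “maybe don’t do that” concrete, here’s a minimal sketch of the alternative: refuse to execute anything that isn’t signed by a key you trust. This uses libsodium’s detached Ed25519 API; run_plugin() and the key handling are hypothetical stand-ins:

```c
#include <sodium.h>
#include <stdio.h>

/* Hypothetical entry point that actually executes the plugin code. */
extern void run_plugin(const unsigned char *code, unsigned long long len);

/* Only run code that carries a valid detached signature from a trusted key. */
int maybe_run(const unsigned char *code, unsigned long long len,
              const unsigned char sig[crypto_sign_BYTES],
              const unsigned char trusted_pk[crypto_sign_PUBLICKEYBYTES])
{
    if (sodium_init() < 0)
        return -1; /* crypto library unavailable: refuse to proceed */
    if (crypto_sign_verify_detached(sig, code, len, trusted_pk) != 0) {
        fprintf(stderr, "rejecting unsigned or tampered code\n");
        return -1; /* the attacker's payload never executes */
    }
    run_plugin(code, len); /* only signed, trusted code reaches this point */
    return 0;
}
```

The adversary’s intelligence never enters into it: if the platform won’t execute untrusted code in the first place, there’s nothing for a Rowhammer-style exploit to hammer with.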
Thanks, I’ll retract that comment.
Not only is this a bad summary, it’s extraordinarily toxic and uncharitable.