No, No One Is Actually Proposing a Global Surveillance State
Here’s the text of a tweet from an AI policy person: “Underrated fact: there’s currently no plausible solution of a post-AGI world which doesn’t involve global surveillance or globally centralized power.” Seems like you could be forgiven for thinking this guy is thinking of something like a global surveillance state.
As someone concerned about this issue, I don’t think that the arguments in that subheader are aimed at me or anyone actually concerned about this—the arguments are far too breezy and glib—rather, they’re aimed at someone who already agrees with you.
So here are a few points:
You list the Shavit paper as evidence that we can do surveillance without disturbing privacy; the Shavit paper is closer to a research project than a full solution. Here’s what the paper itself says about its claims:
Lastly, rather than proposing a comprehensive shovel-ready solution, this work provides a high-level solution design. Its contribution is in isolating a set of open problems whose solution would be sufficient to enable a system that achieves the policy goal. If these problems prove unsolvable, the system’s design will need to be modified, or its guarantees scaled back.
So: we don’t know whether it will actually work, which means we don’t actually know that we can do surveillance without disturbing privacy.
The Shavit proposal (and similar proposals) of “just watching datacenters” fails in the face of techniques for training models over high-latency, low-bandwidth connections (i.e., over the internet). This is an active area of research; the “oh, we’ll only watch special chips” approach fails if this research succeeds, which I think could very well happen—granted, with an efficiency hit, but still. I expect the surveillance proposals to simply become more invasive when this happens.
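To make the worry concrete, here’s a minimal single-process sketch of the kind of low-communication training I mean (local-SGD-style parameter averaging, in the spirit of published work on distributed training over slow links). The model, data, and hyperparameters are made-up toys, not anything from the Shavit paper or a real system; the point is just that workers only need to talk once per round, not once per step:

```python
# Toy sketch of low-communication ("local SGD") training: each worker takes
# many local gradient steps on its own data shard, and parameters are only
# synchronized (averaged) once per round. All numbers are illustrative.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

NUM_WORKERS = 4     # machines connected only by a slow link
LOCAL_STEPS = 50    # gradient steps taken between synchronizations
SYNC_ROUNDS = 20    # number of parameter-averaging rounds

def make_model():
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

# Toy regression task; each worker holds its own shard of the data.
X = torch.randn(4000, 10)
y = X @ torch.randn(10, 1) + 0.1 * torch.randn(4000, 1)
shards = list(zip(X.chunk(NUM_WORKERS), y.chunk(NUM_WORKERS)))

global_model = make_model()

for rnd in range(SYNC_ROUNDS):
    worker_states = []
    for xw, yw in shards:
        # Each worker starts from the last synchronized parameters...
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=1e-2)
        # ...and trains locally with no communication at all.
        for _ in range(LOCAL_STEPS):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(xw), yw)
            loss.backward()
            opt.step()
        worker_states.append(model.state_dict())

    # The only communication: average parameters once per round.
    avg = {k: torch.stack([s[k] for s in worker_states]).mean(0)
           for k in worker_states[0]}
    global_model.load_state_dict(avg)

    with torch.no_grad():
        full_loss = nn.functional.mse_loss(global_model(X), y).item()
    print(f"round {rnd:2d}  loss {full_loss:.4f}")
```

If something like this scales (and scaling it is exactly what the research is trying to do), the compute never has to sit in one monitored building in the first place.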
Even if it were technically possible to surveil everything while maintaining privacy—well, what are the odds, do you think, that the privacy-preserving solution is the one the government will go with? That they’ll install the minimum amount of hardware needed to figure out how big the training run is? We’re gonna pay the costs of an actual “solution,” not the ideal “solution” that we’re thinking about here. We should calculate accordingly.
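For reference, the thing a truly minimal monitoring regime would be verifying is roughly one number: total training compute, which the standard 6 × parameters × tokens rule of thumb lets you estimate on the back of an envelope. The figures below are illustrative, not any particular lab’s run:

```python
# Back-of-the-envelope: what "how big is the training run" even means.
# Standard rule of thumb: training compute ~ 6 * parameters * tokens.
# The numbers are illustrative placeholders, not a real training run.
params = 70e9        # a 70B-parameter model
tokens = 1.4e12      # trained on 1.4T tokens
flops = 6 * params * tokens
print(f"~{flops:.1e} training FLOPs")   # ~5.9e+23
```

That’s the whole payload a minimal scheme needs. Anything installed beyond what’s required to check a number like that is surveillance capacity bought for other purposes.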
And finally—suppose that ML becomes increasingly important. Machine minds are increasingly thinking the thoughts that matter. Access to these minds is necessary for any kind of effective action. And governments control which machine minds are allowed, through selectively permitting only some minds above a certain level of intelligence to exist.
This seems like it could be awful. I feel like there’s… a real failure to consider 2nd and 3rd order effects in what you wrote.
The plan is to give government a bunch of the machinery it needs to prohibit minds it dislikes—and access to those minds will be the most important resource on earth. I think giving government this kind of machinery increases gov-related x-risks. Maybe it’s still necessary to decrease some other x-risk—maybe it’s not, and it’s a really bad idea—but like, let’s be honest and look that in the face.