My actual thought process for believing GDPR is good is not that it “is a sample from the empirical distribution of governance demands”, but that it initializes the process of governments (and thereby the public they represent) weighing in on what tech companies can and cannot design their systems to reason about, and more specifically the degree to which systems are allowed to reason about humans. Having a regulatory structure in place for restricting access to human data is a good first step, but we’ll probably also eventually want restrictions on how systems process the data once they have it (e.g., they probably shouldn’t be allowed to use the data they have to come up with ways to significantly deceive or manipulate users).
I’ll say the same thing about fairness: I value having initialized the process of thinking about it not because it is in the “empirical distribution of governance demands”, but because it’s a useful governance demand. When things are more fair, people fight less, which is better and safer. I don’t mind much that existing fairness research hasn’t converged on what I consider “optimal fairness”, because I think that consideration is dwarfed by the fact that technical AI researchers are thinking about fairness at all.
That said, while I disagree with your analysis, I do agree with your final position:
I hope that technical AI x-risk/existential safety researchers focus on legitimizing and fulfilling those governance and accountability demands that are in fact legitimate.
I hope that discussion of AI governance and accountability does not inhabit a frame in which demands for governance and accountability are reliably legitimate.