I think the two camps are less orthogonal than your examples of privacy and compute regulation suggest. There’s room for plenty of excellent policy interventions that both camps could work together to support. For instance, increasing regulatory requirements for transparency in algorithmic decision-making (and, crucially, building capacity both in regulators and in the market supporting them to enforce this) is something I think both camps would get behind, and could productively work on together — the x-risk camp because it creates demand for interpretability and more, the other camp because, e.g., it makes fairness issues easier to demonstrate. I think there are subculture-clash reasons the two camps don’t always get on, but these can be overcome, particularly given there’s a common enemy (misaligned powerful AI). See also the paper “Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society”.
I know lots of people who are uncertain about how big the risks are, who care about both problems, and who work on both. I am one of them — I care more about AGI risk, but I think the best things I can do to help avert it involve working with the people you think aren’t helpful.