I am in strong agreement here. There are definitely aspects of AI Safety that rely on a confluence of skills beyond technical research. This is clearly a multi-disciplinary endeavor that has not (yet) fully capitalized on the multiple perspectives talented, interested people can bring.
One cautionary tale comes from my own observation that there is a bit of a divide between “AI Safety” folks and “AI Ethics” folks, at least in online discourse. There isn’t much overlap between the two communities, and there is real potential for animosity between strong adherents of one perspective or the other. I think this is born of a scarcity mindset, where people see a tradeoff between a focus on X-risk and other ethical goals, like fairness.
However, while that divide seems to be real (and potentially well-founded) in some conversations, many safety practitioners I know are more pragmatic in their approaches. Institutional capacity building, regulations focusing on responsible AI dimensions, technical alignment research—all can coexist and represent different and complementary hypotheses on how we can best develop AI systems.
The full breadth of this endeavor is beyond any one community, but a problems-focused view could be attractive to a broad range of people and benefit from perspectives that have not typically been part of this community. It’s inevitable that many more people will want to work in this space, given the recent popularization of generative systems.