Alignment research is currently a mix of different agendas that need more unity. The alignment agendas of some researchers seem hopeless to others, and one of the favorite activities of alignment researchers is to criticize each other constructively.
Given the risk-landscape uncertainty and conflicting opinions, I would argue that this is precisely the optimal high-level approach for AI Alignment research agendas at this point in time. ‘Casting a broader net’ allows us to more quickly identify and mobilize resources towards areas of urgently-needed alignment research once they are identified with sufficient confidence. IMHO, constructive debate about research priorities is hard to argue against. Moreover, much like the lack of publication of negative results creates significant inefficiencies in scientific R&D, even a shallow understanding of a broader ‘space’ of alignment solutions has value in itself: it can identify approaches that are ineffective or inapplicable to certain AI capabilities.