It would help if you specified which subset of “the community” you’re arguing against. I had a similar reaction to your comment as Daniel did, since in my circles (AI safety researchers in Berkeley), governance tends to be well-respected, and I’d be shocked to encounter the sentiment that working for OpenAI is a “betrayal of allegiance to ‘the community’”.
To be clear, I do think most people who have historically worked on “alignment” at OpenAI have probably caused great harm! And I am broadly in favor of stronger community norms against working at AI capability companies, even in so-called “safety positions”. So I do think there is something to the sentiment Critch is describing.
Agreed! But the words he chose were hyperbolic and unfair. Even an angrier, more radical version of Habryka would still endorse “the idea that people outside the LessWrong community might recognize the existence of AI risk.”