This style of thinking seems illogical to me. It has already clearly resulted in a sort of evaporative cooling at OpenAI.
I don’t think what’s happening at OpenAI is “evaporative cooling as a result of people being too risk-averse to do alignment work that’s adjacent to capabilities”. I would describe it more as “purging anyone who tries to provide oversight”. I don’t think the safety-conscious people who are leaving OpenAI are doing so because of concerns like the OP’s; they are doing it because they are being marginalized and the organization is acting in ways that are fairly obviously reckless.