How were you already sure of this before the resignations actually happened?
OpenAI's enthusiastic commercialization of AI + the “Superalignment” approach being exactly the approach I’d expect someone doing safety-washing to pick + the November 2023 board drama + the stated trillion-dollar plans to increase worldwide chip production (which directly contradict the way OpenAI previously framed its safety concerns).
Some of the preceding resignations (chiefly Daniel Kokotajlo’s) also played a role here, though I didn’t update much on them either.
Counter-counter-argument: safety-motivated people, especially those entering at a low level, have ~zero ability to change anything for the better internally, whereas they could contribute usefully elsewhere. Meanwhile, the presence of token safety-motivated people at OpenAI improves OpenAI’s ability to safety-wash its efforts, by pointing at them and saying “look how many resources we’re giving them!”, as was attempted with Superalignment.