I wouldn’t be surprised if someone were pressured into doing better AI censorship in a way that has no relevance to AI safety but does make OpenAI a lot of money.
I disagree for the role advertised; I would be surprised by that. (I’d be less surprised if they advised on some post-training work that you’d think of as capabilities. I think the “AI censorship” work is mostly done by a different team that doesn’t talk to the superalignment people much. But I don’t know where the superoversight people have been moved in the org; maybe they’d naturally talk more now.)