Something I’m confused about: what is the threshold that needs meeting for the majority of people in the EA community to say something like “it would be better if EAs didn’t work at OpenAI”?
Imagining the following hypothetical scenarios over 2024/25, I can’t confidently predict whether any of them would individually cause that response within EA:
Ten to fifteen more OpenAI staff quit for varied and unclear reasons; no public information emerges beyond rumours
There is another board shakeup because senior leaders seem worried about Altman. Altman stays on
The Superalignment team is disbanded
OpenAI doesn’t let the UK or US AISIs safety-test GPT-5/6 before release
There are strong rumours, at the end of 2025, that they’ve achieved weakly general AGI internally
This question is two steps removed from reality. Here’s what I mean by that. Putting brackets around each of the two steps:
what is the threshold that needs meeting [for the majority of people in the EA community] [to say something like] “it would be better if EAs didn’t work at OpenAI”?
Without these steps, the question becomes:
What is the threshold that needs meeting before it would be better if people didn’t work at OpenAI?
Personally, I find that a more interesting question. Is there a reason why the question is phrased at two removes like that? Or am I missing the point?
What does a “majority of the EA community” mean here? Does it mean that people who work at OAI (even on superalignment or preparedness) are shunned from professional EA events? Does it mean that when they ask, people tell them not to join OAI? And who counts as “in the EA community”?
I don’t think it’s very constructive to bar people from all or even most EA events just because they work at OAI, even if there’s a decent amount of consensus that people shouldn’t work there. Of course, it’s fine to host events (even professional ones!) that don’t invite OAI people (or Anthropic people, or METR people, or FAR AI people, etc.), and such events do happen. But I don’t think barring people from EAG or e.g. Constellation just because they work at OAI would help make the case (not that there’s any chance of this happening in the near term), and it would most likely backfire.
I think that currently, many people (at least in the Berkeley EA/AIS community) will tell you not to join OAI if asked. I’m not sure they form a majority in absolute numbers, but they’re at least a majority in some professional circles (e.g. most people at both FAR/FAR Labs and Lightcone/Lighthaven would probably say this). I also think many people would say that, on the margin, too many people are trying to join OAI rather than other important jobs, due to factors like OAI paying a lot more than non-scaling-lab jobs and having more legible prestige.
Empirically, it sure seems that significantly more people around here join Anthropic than OAI, despite Anthropic being a significantly smaller company.
That said, I think almost none of these people would advocate for ~zero x-risk-motivated people working at OAI, only that the marginal x-risk-concerned technical person should not work there.
What specific actions are you hoping for here, that would cause you to say “yes, the majority of EA people say ‘it’s better to not work at OAI’”?
[ I don’t consider myself EA, nor a member of the EA community, though I’m largely compatible in my preferences ]
I’m not sure it matters what the majority thinks, only what marginal employees (those who can choose whether or not to work at OpenAI) think. And what you think, if you are considering whether to apply, or whether to use their products and give them money/status.
Personally, I just took a job in a related company (working on applications, rather than core modeling), and I have zero concerns that I’m doing the wrong thing.
[ in response to request to elaborate: I’m not going to at this time. It’s not secret, nor is my identity generally, but I do prefer not to make it too easy for ’bots or searchers to tie my online and real-world lives together. ]