UPDATE: I did mean this comment as a genuine question, not a rhetorical one. I’m actually curious about what underlies these perceptions of OpenAI. Feel free to private message me if you’d prefer not to reply in public. Another possible reason that occurred to me after posting this comment is that OpenAI was the first serious competitor to DeepMind in pursuing AGI, so people may resent them making AGI research more competitive and hence arguably creating pressure to rush decisions and cut corners on safety.
Why do so many people on LessWrong say that OpenAI doesn’t prioritize AI x-risk?
I’m not saying that they definitely do. But since they were founded with AI risks in mind and have had some great safety researchers over the years, I am curious which events led people to say so frequently that they are not prioritizing it. I know a lot of their safety researchers left at one point, which is a bad sign about their safety prioritization, but not conclusive.
I get the sense that a lot of people were upset when OpenAI changed from a non-profit to a capped-profit structure. But that doesn’t say anything directly about how they prioritize x-risk, and the fact that they didn’t switch to a fully for-profit structure suggests they are at least prioritizing misuse risks.
OpenAI has had at least a few safety-related publications since 2020:
https://openai.com/blog/language-model-safety-and-misuse/
https://openai.com/blog/summarizing-books/
https://openai.com/blog/learning-to-summarize-with-human-feedback/
https://openai.com/blog/microscope/