Nate tells me that his headline view of OpenAI is mostly the same as his view of other AGI organizations, so he feels a little odd singling out OpenAI. [...] But, while this doesn’t change the fact that we view OpenAI’s effects as harmful on net currently, Nate does want to acknowledge that OpenAI seems to him to be doing better than some other orgs on a number of fronts:
I wanted to give this a big +1. I think OpenAI is doing better than literally every single other major AI research org except probably Anthropic and DeepMind on trying to solve the AI-not-killing-everyone task. I also think that Anthropic/DeepMind/OpenAI are doing better in terms of not publishing their impressive capabilities research than ~everyone else (e.g. not revealing the impressive downstream benchmark numbers on Codex/text-davinci-002 performance). Accordingly, I think there’s a tendency to give OpenAI an unfair amount of flak compared to, say, Google Brain or FAIR or any of the startups like Adept or Cohere.
This is probably a combination of three effects:
OpenAI is clearly on the cutting edge of AI research.
OpenAI has a lot of visibility in this community, due to its physical proximity and a heavy overlap between OpenAI employees and the EA/Rationalist social scene.
OpenAI is publicly talking about alignment; other orgs don’t even acknowledge it, which makes OpenAI a heretic rather than an infidel.
And I’m happy that this post pushes against this tendency.
(And yes, standard caveats, reality doesn’t grade on a curve, etc.)
Accordingly, I think there’s a tendency to give OpenAI an unfair amount of flak compared to, say, Google Brain or FAIR or any of the startups like Adept or Cohere.
I’m not sure I agree that this is unfair.
OpenAI is clearly on the cutting edge of AI research.
This is obviously a good reason to focus on them more.
OpenAI has a lot of visibility in this community, due to its physical proximity and a heavy overlap between OpenAI employees and the EA/Rationalist social scene.
Perhaps we have a responsibility to scrutinize/criticize them more because of this, due to comparative advantage (who else could do it more easily or better than we can?), and because they’re arguably deriving some warm fuzzy glow from this association? (Consider FTX as an analogy.)
OpenAI is publicly talking about alignment; other orgs don’t even acknowledge it, which makes OpenAI a heretic rather than an infidel.
Yes, but they don’t seem keen on talking about the risks/downsides/shortcomings of their alignment efforts (e.g., they make their employees sign non-disparagement agreements, and as a result the former alignment team members who left in a big exodus can’t say exactly why they left). If you only talk about how great your alignment effort is, maybe that’s worse than not talking about it at all, as it’s liable to give people a false sense of security?