Also every one of the organizations you named is a capabilities company which brands itself based on the small team they have working on alignment off on the side.
I’m not sure whether OpenAI was one of the organizations named, but if so, this reminded me of something Scott Aaronson said on this topic in the Q&A of his recent talk “Scott Aaronson Talks AI Safety”:
Maybe the one useful thing I can say is that, in my experience, which is admittedly very limited—working at OpenAI for all of five months—I’ve found my colleagues there to be extremely serious about safety, bordering on obsessive. They talk about it constantly. They actually have an unusual structure, where they’re a for-profit company that’s controlled by a nonprofit foundation, which is at least formally empowered to come in and hit the brakes if needed. OpenAI also has a charter that contains some striking clauses, especially the following:
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.
Of course, the fact that they’ve put a great deal of thought into this doesn’t mean that they’re going to get it right! But if you ask me: would I rather that it be OpenAI in the lead right now or the Chinese government? Or, if it’s going to be a company, would I rather it be one with a charter like the above, or a charter of “maximize clicks and ad revenue”? I suppose I do lean a certain way.
Source: 1:12:52 in the video, edited transcript provided by Scott on his blog.

In short, it seems to me that Scott would not have pushed back on a claim that OpenAI is an organization "that seem[s] like the AI research they're doing is safety research" in the way you did, Jim.
I assume that all the sad-reactions are sadness that all these people at the EAGx conference aren't noticing on their own that their work/organization seems bad for the world, and that these conversations are therefore necessary. (The sheer number of conversations like this you're having also suggests that it's a hopeless uphill battle, which is sad.)
So I wanted to bring up what Scott Aaronson said here to highlight that "systemic change" interventions are also necessary. Scott's views are influential; targeting conversations with him and other "thought leaders" who aren't sufficiently concerned about slowing down capabilities progress (or who don't seem to emphasize that concern enough when talking about organizations like OpenAI) would potentially be helpful, or even necessary, for us to get to a world a few years from now where everyone studying ML or working on AI capabilities is at least aware of the arguments about AI alignment and why increasing AI capabilities seems harmful.