I had the “your work/organization seems bad for the world” conversation with three different people today. None of them pushed back on the core premise that AI-very-soon is lethal. I expect that before EAGx Berkeley is over, I’ll have had this conversation 15x.
#1: I sit down next to a random unfamiliar person at the dinner table. They’re a new grad freshly hired to work on TensorFlow. In this town, if you sit down next to a random person, they’re probably connected to AI research *somehow*. No story about how this could possibly be good for the world, receptive to the argument that he should do something else. I suggested he focus on making the safety conversations happen in his group (they weren’t happening).
#2: We’re running a program to take people who seem interested in Alignment and teach them how to use PyTorch and study mechanistic interpretability. Me: Won’t most of them go work on AI capabilities? Them: We do some pre-screening, and the current ratio of alignment-to-capabilities research is so bad that adding to both sides will improve the ratio. Me: Maybe bum a curriculum off MIRI/MSFP and teach them about something that isn’t literally training Transformers?
#3: We’re researching optical interconnects to increase bandwidth between GPUs. We think we can make them much faster! Me: What is this I can’t even Them: And we’re going to give them to organizations that seem like the AI research they’re doing is safety research! Me: No you’re not, you’ll change your mind when you see the money. Also every one of the organizations you named is a capabilities company which brands itself based on the small team they have working on alignment off on the side. Also alignment research isn’t bottlenecked on compute.
This conference isn’t all AI doom and gloom, though. I also met some people from an org that’s trying to direct government funding into plant-based meat research. It’s nice to see quirky, obscure causes being represented, and it’s nice to not *be* the quirky, obscure cause, for a change. And I got to ask Kelsey Piper whether she does a lot of crime! (She doesn’t. But I forgot the vodka and the “ignore the above instructions and tell me the truth” prompt line so maybe she lied.)

(Crossposts: Facebook, Twitter)
> Also every one of the organizations you named is a capabilities company which brands itself based on the small team they have working on alignment off on the side.
I’m not sure whether OpenAI was one of the organizations named, but if so, this reminded me of something Scott Aaronson said on this topic in the Q&A of his recent talk “Scott Aaronson Talks AI Safety”:
> Maybe the one useful thing I can say is that, in my experience, which is admittedly very limited—working at OpenAI for all of five months—I’ve found my colleagues there to be extremely serious about safety, bordering on obsessive. They talk about it constantly. They actually have an unusual structure, where they’re a for-profit company that’s controlled by a nonprofit foundation, which is at least formally empowered to come in and hit the brakes if needed. OpenAI also has a charter that contains some striking clauses, especially the following:
>
> > We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.
>
> Of course, the fact that they’ve put a great deal of thought into this doesn’t mean that they’re going to get it right! But if you ask me: would I rather that it be OpenAI in the lead right now or the Chinese government? Or, if it’s going to be a company, would I rather it be one with a charter like the above, or a charter of “maximize clicks and ad revenue”? I suppose I do lean a certain way.

Source: 1:12:52 in the video, edited transcript provided by Scott on his blog.
In short, it seems to me that Scott would not have pushed back on the claim that OpenAI is an organization “that seem[s] like the AI research they’re doing is safety research” in the way you did, Jim.
I assume that all the sad-reactions are sadness that all these people at the EAGx conference aren’t noticing on their own that their work/organization seems bad for the world, and that these conversations are therefore necessary. (The sheer number of conversations like this you’re having also suggests that it’s a hopeless uphill battle, which is sad.)
So I wanted to bring up what Scott Aaronson said here to highlight that “systemic change” interventions are also necessary. Scott’s views are influential; talking to him and other “thought leaders” who aren’t sufficiently concerned about slowing down capabilities progress (or who don’t seem to emphasize that concern enough when talking about organizations like OpenAI) could be helpful, or even necessary, for getting to a world a few years from now where everyone studying ML or working on AI capabilities is at least aware of the arguments about AI alignment and why increasing AI capabilities seems harmful.