A number of people seem to have departed OpenAI at around the same time as you. Is there a particular reason for that which you can share? Do you still think that people interested in alignment research should apply to work at OpenAI?
A number of people seem to have departed OpenAI at around the same time as you. Is there a particular reason for that which you can share?
My own departure was driven largely by my desire to work on more conceptual/theoretical issues in alignment. I’ve generally expected to transition back to this work eventually, and I think there are a variety of reasons that OpenAI isn’t the best place for it. (I would likely have moved earlier if Geoffrey Irving’s departure hadn’t left me managing the alignment team.)
I’m pretty hesitant to speak on behalf of other people who left. It’s definitely not a complete coincidence that I left around the same time as others (though there were multiple important coincidences), but I can talk about my own motivations:
A lot of the people I talked with at OpenAI left, decreasing the benefits of remaining at OpenAI and increasing the benefits of talking to people outside of it.
The departures led to a lot of safety-relevant shakeups at OpenAI. It’s not super clear whether that makes it an unusually good or bad time to shake up the management of my team, but it felt like an unusually good time to me (this might have been a rationalization; hard to say).
Do you still think that people interested in alignment research should apply to work at OpenAI?
I think alignment goes a lot better if there are strong teams trying to apply best practices to align state-of-the-art models, teams that have been learning what it actually takes to do that in practice and building social capital. Basically that seems good because (i) I think there’s a reasonable chance that we fail not because alignment is super hard but because we just don’t do a very good job during crunch time, and such teams are the best intervention for doing a better job, and (ii) even if alignment is very hard and we need big new ideas, I think such teams will be important for empirically characterizing and ultimately adopting those ideas. It’s also an unusually unambiguous good thing.
I spent a lot of time at OpenAI largely because I wanted to help get that kind of alignment effort going. For some color, see this post; that team still exists (under Jan Leike), and there are now some other similar efforts at the organization.
I’m not as in the loop as I was a few months ago, so you might want to defer to folks at OpenAI, but from the outside I still tentatively feel pretty enthusiastic about this kind of work happening at OpenAI. If you’re excited about this kind of work, then OpenAI still seems to me like a good place to go. (It also seems reasonable to think about DeepMind and Google, and of course I’m a fan of ARC for people who are a good fit, and I suspect there will be more groups doing good applied alignment work in the future.)