Not just the name but the team, correct?
As far as I understand, the banner is distinct and the team isn't the same, but there's meaningful overlap with the continuation of the agenda. The most likely place for me to be wrong here is whether work is actually continuing in what could be called this direction. Do you think the representation should be changed?
My impression from coverage in e.g. Wired and Future Perfect was that the team was fully dissolved, the central people behind it left (Leike, Sutskever, others), and Leike claimed OpenAI wasn't meeting its publicly announced compute commitments even before the team dissolved. I haven't personally seen new work coming out of OpenAI trying to 'build a roughly human-level automated alignment researcher' (the stated goal of that team). I don't have any insight beyond the media coverage, though; if you've looked into it more deeply than that, you know more than I do.
(Fairly minor point either way; I was just surprised to see it expressed that way)
Very fair observation; my take is that a relevant continuation is happening under OpenAI Alignment Science, but I'd be interested in counterpoints. The main claim I'm gesturing at here is that the agenda is alive in other parts of the community, despite the previous flagship effort (and the specific team) going down.
Oh, fair enough. Yeah, definitely that agenda is still very much alive! Never mind, then, carry on :)
And thanks very much to you and your collaborators for this update; I've pointed a number of people to the previous version, but with the field evolving so quickly, having a new version seems quite high-value.