Edit: Whoops, “Head of Mission Alignment” is actually a person responsible for “working across the company to ensure that we get all pieces (and culture) right to be in a place to succeed at the mission”, and not the head of alignment research. Disregard the below.
In other words, the new head of AI alignment at OpenAI is on record lecturing EAs that misalignment risk from AGI is not real.
It was going to happen eventually. If you pick competent people who take the jobs you give them seriously, appoint them Alignment Czar, and they then inevitably converge on thinking your safety policy is suicidal and leave your company, you'll need to either change your policy or stop appointing people who take their jobs seriously to that position.
I'd been skeptical of John Schulman, given his lack of an alignment-related track record and his likely bias towards approaching the problem via the ML modus operandi. But, evidently, he took his job seriously enough to actually bother building a gears-level model of it, at which point he decided to run away. They'd tried appointing someone competent but previously uninterested in the safety side of things to that position – and that didn't work.
Now they're trying a new type of person for the role: someone who comes in with strong preconceptions against taking the risks seriously. I expect he's either going to take his job seriously anyway (and jump ship within, say, a year), or he's going to keep parroting the party line without deeply engaging with it (and not actually do much competent work, i.e. he's just there for PR).
I’m excited to see how the new season of this hit sci-fi telenovela is going to develop.