Some things that feel incongruent with this:
- Eliezer talks a lot in the Arbital article on CEV about how useful it is to have a visibly neutral alignment target
- Right now Eliezer is pursuing a strategy which does not meaningfully empower him at all (just halting AGI progress)
- Eliezer complains a lot about various people invoking AI alignment while mostly just pursuing their personal objectives (in particular, the standard AI censorship stuff being thrown into the same bucket)
- Lots of conversations I've had with MIRI employees
I would be happy to take bets here about what people would say.
Sure, I DM’d you.