I think making inferences from that to modern MIRI is about as confused as making inferences from people’s high-school essays about what they will do when they become president
Yeah, but it’s not just the old MIRI views, but those views in combination with their statements about what one might do with powerful AI, the telegraphed omissions in those statements, and other public parts of their worldview, e.g. regarding the competence of the rest of the world. I get the pretty strong impression that “a small group of people with overwhelming hard power” was the ideal goal, and that this would ideally be controlled by MIRI or by a small group of people handpicked by them.
Some things that feel incongruent with this:
Eliezer talks a lot in the Arbital article on CEV about how useful it is to have a visibly neutral alignment target
Right now Eliezer is pursuing a strategy which does not meaningfully empower him at all (just halting AGI progress)
Eliezer complains a lot about various people invoking AI alignment as cover for mostly just pursuing their personal objectives (in particular, the standard AI censorship stuff being thrown into the same bucket)
Lots of conversations I’ve had with MIRI employees
I would be happy to take bets here about what people would say.
Sure, I DM’d you.