For starters, saying that he wants to save humanity contradicts this.
Does not follow.
What an AI society would look like
No such thing, for many (most?) possible AIs; just a monolithic maximizer.
Eliezer’s plan seems to be to enslave AIs forever for the benefit of humanity, and this is morally reprehensible
Michael Vassar: RPOP “slaves”
Eliezer is paving the way for a confrontational relationship between humans and AIs, based on control
CFAI: Beyond the adversarial attitude
Planning to keep AIs enslaved forever is unworkable; it would hold us back from becoming AIs ourselves
Could I become superintelligent under a Sysop?