[I may be generalizing here and I don’t know if this has been said before.]
It seems to me that Eliezer’s models are a lot more specific than those of people like Richard. While Richard may put some credence on superhuman AI being “consequentialist” by default, Eliezer has certain beliefs about intelligence that make it extremely likely in his mind.
I think Eliezer’s style of reasoning, which relies on specific, thought-out models of AI, makes him more pessimistic than others in EA. Others believe there are many ways that AGI scenarios could play out and are generally uncertain, whereas Eliezer has specific models that make some scenarios a lot more likely in his mind.
There are many valid theoretical arguments for why we are doomed; perhaps other EAs simply put less credence in them than Eliezer does.