The simplest explanation for choosing a career in existential risk reduction is that it makes not building a humanity-saving superintelligent AI a virtue instead of a failure. Not that there’s anything wrong with failing every now and then.
On the plus side, you seem to be saying what you mean now instead of spouting nonsense about the characters. On the minus side, Eliezer still wants to build a humanity-saving AI if he can, though he has explicitly said, “I don’t know how to do this yet.” See also.
OK