Presumably for similar reasons that others (like those with MIRI affiliations) think a negative singularity is a sufficient risk to be worth fighting. Only more so (and without as much expectation of success).
I agree with him, because I think UFAI is probably much easier than either substantial IA or FAI, and there are plenty of very smart people with screwed up metaethics who want to build what (unknown to them) would turn out to be UFAI.
Why believe that, though?
Feels like the most likely outcome.