There have been times when the thought of a pretty much inevitable negative singularity has made me feel quite depressed. (These days, I’m somewhat better at not thinking about it.)
Why believe in such a thing, though?
Feels like the most likely outcome.
Presumably for similar reasons that others (like those with MIRI affiliations) think that a negative singularity is a sufficient risk to be worth fighting. Only more so (and without as much expectation of success).
I agree with him, because I think UFAI is probably much easier than either substantial IA or FAI, and there are plenty of very smart people with screwed up metaethics who want to build what (unknown to them) would turn out to be UFAI.