Also, competition between humans (with machines as tools) seems far more likely to kill people than a superintelligent runaway. However, it's (arguably) not so likely to kill everybody. MIRI appears to be focusing on the "killing everybody" case because, according to them, that is a really, really bad outcome.
The idea that losing 99% of humanity would count as acceptable losses may strike laymen as crazy. However, it might appeal to some of those in the top 1%. People like Peter Thiel, maybe.