A while back, I made the argument that the ability to remove fundamental human limits will eventually lead to the loss of everything we value.
How long have you been this pessimistic about the erasure of human value?
Not sure. I’ve been pessimistic about the Singularity for several years, but the general argument for human value being doomed-with-a-very-high-probability only really clicked sometime late last year.
This seems to assume a Hansonesque competitive future rather than an FAI singleton. Is that right?
Pretty much.