A while back, I made the argument that the ability to remove fundamental human limits will eventually lead to the loss of everything we value.
How long have you been this pessimistic about the erasure of human value?
Not sure. I’ve been pessimistic about the Singularity for several years, but the general argument for human value being doomed-with-a-very-high-probability only really clicked sometime late last year.
This seems to assume a Hansonesque competitive future rather than an FAI singleton. Is that right?
Pretty much.