I’m going to apply for this role.
I would love to read a rationality textbook authored by a paperclip maximizer.
Me too. After all, a traditional paperclip maximizer would be quite rational—in fact much more rational than anyone known today—but its objectives (and therefore likely its textbook examples) would appear very unusual indeed!
If for no other reason than that it would mean they aren't actually an agent that is maximizing paperclips. That would be dangerous!
Almost any human existential risk is also a paperclip risk.