One problem with (11) is that for the threat to be plausible, the AI has to assume both of the following at once:
a) Humans know so little that we have to resort to questionable AI-safety “tests” like this one.
b) Humans know so much that we can afford for our AI-safety tests to simulate interactions with an entire universe full of sentients.
The AI version of Pascal’s Wager seems to be much like the human version, only even sillier.
How large is the simulated universe? The AI knows only about the computing capacity that is simulated; it has no information about the nature of whatever is doing the simulating.