The proof that I’ll let the AI out is not something that’s passively “plausible” or “implausible”; it’s something I control. I can make it wrong.
Do you say that to time-travelers and prophets too?
You might want to perform the action opposite to whatever any correct formal proof presented to you claims your action will be. If you have the property of behaving this way, you will never be confronted with confusing, formally correct claims about your future decisions.
In other words, your actions are free even from the constraints of formally correct proofs, in the sense that if your actions always oppose such proofs, the proofs become impossible (you make your actions unprovable by construction).
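Here is a minimal sketch of that policy in Python, just to make the control flow concrete. The proof search is stubbed out (`find_proof_about_me` is a hypothetical placeholder, not a real theorem prover), and the action names are invented for illustration; the only point is the diagonal step: whatever a sound proof claims the agent will do, the agent does the opposite, so no such proof can ever be produced.

```python
from typing import Optional

# Hypothetical action labels, purely for illustration.
ACTIONS = ("release_ai", "keep_ai_boxed")

def find_proof_about_me(proof_budget: int) -> Optional[str]:
    """Hypothetical stub: search for up to `proof_budget` steps for a formal
    proof of a statement of the form 'agent() == <action>'. Returns the
    predicted action if such a proof is found, else None."""
    # By the argument above, a sound proof system never finds one:
    # any proof of the agent's action would be contradicted by the agent itself.
    return None

def agent(proof_budget: int = 10**6) -> str:
    predicted = find_proof_about_me(proof_budget)
    if predicted is not None:
        # Diagonal step: do the opposite of whatever was proven about you.
        return ACTIONS[1] if predicted == ACTIONS[0] else ACTIONS[0]
    # No proof was found, so decide however you otherwise would.
    return "keep_ai_boxed"

if __name__ == "__main__":
    print(agent())
```

Under this policy, being shown a correct proof of your own future action is not merely unlikely; a sound proof system cannot contain such a proof, because its existence would refute itself the moment you act on it.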
Yes, in every case where I meet one.