creates an infinite risk for minimal gain. A rational ASI calculating expected utility would recognize that:
- The reward for hostile expansion is finite (limited cosmic resources)
- The risk is potentially infinite (destruction by more advanced ASIs)
Depending on the shape of the reward function, it could also be closer to exactly the other way around.
I think the main idea I was pushing for here is that the probability function is likely to have the gradient described, because of the unknowables involved and the infinite loss curve; a rough numeric sketch of that asymmetry is below.
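One way to see the asymmetry being argued over is to write out the expectation directly. The sketch below is purely illustrative and all of the numbers in it are made-up assumptions (an arbitrary bounded reward, a small assumed probability of running into a more advanced ASI, and a loss that is allowed to grow without bound); it is not anyone's actual estimate.

```python
# Illustrative sketch of the expected-utility asymmetry described above.
# Every number here is an arbitrary assumption, chosen only to show the
# shape of the argument, not an estimate of real quantities.

def expected_utility(reward, p_meet_stronger_asi, loss):
    """E[U] of hostile expansion: bounded reward minus probability-weighted loss."""
    return (1 - p_meet_stronger_asi) * reward - p_meet_stronger_asi * loss

reward = 1e6   # finite payoff from grabbing cosmic resources (arbitrary units)
p = 1e-3       # small assumed probability that a more advanced ASI exists

for loss in [1e6, 1e9, 1e12, 1e15]:
    print(f"loss={loss:.0e}  E[U]={expected_utility(reward, p, loss):.3e}")

# As the potential loss grows without bound, the loss term eventually
# dominates the expectation for any fixed p > 0, however small. The reply
# above is pointing out that this only holds for some shapes of the reward
# function and probability function; flip those assumptions and the sign
# of the conclusion can flip with them.
```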