I’m not sure I agree—in the original thought experiment, it was a given that increasing intelligence would lead to changes in values in ways that the agent, at t=0, would not understand or share.
At this point, one could decide whether to go for it or hold back, and we should all consider ourselves lucky that our early sapiens predecessors didn't take the second option.
(By the way, I'm very curious to know what you make of this other Land text: https://etscrivner.github.io/cryptocurrent/)