I absolutely understand that there’s motivation to “solve the problem of value drift and manipulation”. I suspect the problem is literally unsolvable, and that I should be more agnostic about distant values than I seem to be. I’m trying on the idea of simply hoping there are intelligent/experiencing/acting agents for a long, long time, regardless of what form or preferences those agents have.