I’ve lost myself multiple times over, even within my insanely brief memory (well over half a human life expectancy, but still massively inadequate). I give a high probability that my personal experiential sequence will end and be lost forever fairly soon. That’s not my main concern here.
My concern is understanding what I {do|should} care about beyond my own direct experiences. It’s about extrapolating the things I value in my own life to future trillions (or more, eventually) of experiencing entities. Or maybe to a small number of extremely-experiencing entities; I don’t know which elements matter for my decisions about how to influence the future. I seem to care about “interesting” experiences more than specifically pleasant or painful or difficult ones. And I don’t know how to reconcile that with the goals often stated or implied around here.
My point is that there is motivation, on the defense side, to solve the problem of value drift and manipulation both for individual people and for global powers. The thing to preserve/anchor could be anything that shouldn’t be lost, not necessarily the conventional referents of concern. The conservative choice is to try to preserve everything except ignorance. But even a vague preference about the experiences of others is a direction of influence that could persist, indifferent to its own form.
If the problem is solved, there is less motivation to keep behavior or cognition predictable in order to make externalities safe for unprotected others, since there would be no unprotected others to worry about.
I absolutely understand that there’s motivation to “solve the problem of value drift and manipulation”. I suspect that the problem is literally unsolvable, and that I should be more agnostic about distant values than I seem to be. I’m trying on the idea of just hoping that there are intelligent/experiencing/acting agents for a long, long time, regardless of what form or preferences those agents have.