I’m on board with:

...treating preferences as identifying a sort order for universes.
...treating “values” and “preferences” and “goals” as more or less interchangeable terms.
...aggregating multiple goals into a single complex “fulfill my preferences (insofar as they are not mutually exclusive)” goal, at least in principle. (To the extent that we can actually do this, the fact that preferences might have hierarchical dependencies, where satisfying preference A also partially satisfies preference B, becomes irrelevant; all of that is factored into the complex goal. Of course, actually doing this might prove too complicated for any given computationally bounded mind, so such dependencies might still be important in practice.)
...balancing mutually exclusive preferences against one another to create some kind of weighted aggregate, at least in principle. (As above, that’s not to say that all minds can actually do this in practice; different strategies may be appropriate for less capable minds. See the toy sketch after this list.)
...drawing a distinction between which universe(s) I choose, on the one hand, and what steps I take to get there, on the other. (And if we want to refer to steps as “instrumental values” and universes as “terminal values”, that’s OK with me. That said, what I see people doing a lot is mis-identifying steps as universes, simply because we haven’t thought enough about the internal structure and intended results of those steps, so in practice I am skeptical of claims about “terminal values.” In practice, I treat the term as referring to instrumental values I haven’t yet thought enough about to understand in detail.)
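To make the “sort order for universes” and “weighted aggregate” framings concrete, here is a minimal toy sketch in Python. Everything in it (the feature-dict representation of a universe, the two example preference functions, and the weights) is invented purely for illustration, under the assumption that preferences can be scored numerically; it is not something anyone in this exchange actually proposed.

```python
from typing import Callable, Dict, List

# A "universe" here is just a bag of numeric features; a preference scores it,
# and higher scores mean "more preferred". Both are illustrative assumptions.
Universe = Dict[str, float]
Preference = Callable[[Universe], float]

def aggregate(preferences: List[Preference], weights: List[float]) -> Preference:
    """Fold several (possibly conflicting) preferences into one complex
    preference via a weighted sum of their scores."""
    def combined(u: Universe) -> float:
        return sum(w * p(u) for p, w in zip(preferences, weights))
    return combined

# Two toy preferences that trade off against each other.
def prefer_leisure(u: Universe) -> float:
    return u["leisure_hours"]

def prefer_income(u: Universe) -> float:
    return u["income"]

# The weighted aggregate is itself just another preference.
my_preference = aggregate([prefer_leisure, prefer_income], weights=[1.0, 0.5])

candidate_universes = [
    {"leisure_hours": 40.0, "income": 20.0},
    {"leisure_hours": 10.0, "income": 90.0},
    {"leisure_hours": 25.0, "income": 50.0},
]

# "Preferences as a sort order for universes": sort candidates by the
# aggregate score, most-preferred first.
ranked = sorted(candidate_universes, key=my_preference, reverse=True)
```

The point of the sketch is only that, once scores exist, “choosing among universes” reduces to sorting by the aggregate score; all the hard questions live in where the preference functions and weights come from, which is exactly the part a computationally bounded mind may not be able to do.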
no one said that humans intrinsically shared the same preferences.
I’m not sure that’s true. IIRC, a lot of the Fun Theory Sequence and the stuff around CEV sounded an awful lot like precisely this claim. That said, it’s been three years, and I don’t remember details. In any case, if we agree that humans don’t necessarily share the same preferences, that’s cool with me, regardless of what someone else might or might not have said.
And, yes, AIT is relevant.