I assign utility to their values even when they conflict with mine to such a great degree, but I have to weigh that against the negative utility they impose on me.
So, as to the example, you would somewhat value their wanting to kill you, but you would value not dying even more?
That’s my understanding of what I value, at least.
Well, I’m not so sure that those words (the ones that I used to summarize your position) even mean anything.
How could you somewhat value their wanting to kill you (which would be you feeling some desire while cycling through a few different instances of imagining them doing something that leads to your death), but also value not dying even more (which would be you feeling even more desire while moving through a few different examples of imagining yourself staying alive)?
It would be like saying that you value going to the store somewhat (which would be you feeling some desire while cycling through a few different instances of imagining yourself traveling to the store and getting there), but value not actually being at the store even more (which would be you feeling even more desire while moving through a few different examples of imagining not being at the store). But would that make sense? Do those words (the ones making up the first sentence of this paragraph) even mean anything? Or are they just nonsense?
Simply put, would it make sense to say that somebody could value X+Y (where the addition sign refers to adding the first event to the second in a sequence), but not Y (which is a part of X+Y, the thing the person apparently likes)?
As you already pointed out to TheOtherDave, we have multiple values that can conflict with each other. Maximally fulfilling one value can lead to low overall utility, because it creates negative utility according to other values. I have a general desire to fulfill the utility functions of others, but sometimes doing so creates negative utility according to my other values.
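As a rough sketch of that kind of conflict, here is a minimal illustration in Python; the value names and numbers are invented for the example, not taken from anything said above:

```python
# Hypothetical sketch: overall utility as the sum of several value terms.
# All names and numbers here are made up for illustration.

def overall_utility(outcome):
    """Combine two (possibly conflicting) values into a single number."""
    return outcome["others_satisfied"] + outcome["own_survival"]

# Maximally fulfilling the other agents' preference (that I die) scores
# well on the first term but disastrously on the second.
comply = {"others_satisfied": 5, "own_survival": -100}
refuse = {"others_satisfied": -5, "own_survival": 100}

print(overall_utility(comply))  # -95
print(overall_utility(refuse))  # 95
```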
Simply put, could you value X+Y (where the addition sign refers to adding the first event to the second in a sequence), but not Y?
Unless I’m misunderstanding you, yes. Y could have zero or negative utility, but the positive utility of X could be great enough that the combination of the two would have positive overall utility.
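Spelled out with made-up numbers, and assuming (purely for the sake of the example) that utilities add across the events in the sequence:

$$U(X+Y) = U(X) + U(Y) = 10 + (-2) = 8 > 0,$$

so the sequence as a whole can carry positive utility even though Y on its own does not.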
E.g. you could satisfy both values by helping build a (non-sentient) simulation through which they can satisfy their desire to kill you without actually killing you.
But really, I think the problem is that when we treat individual actions as if they were terminal values, it’s difficult to compromise; true terminal values, however, tend to be more personal than that.