Ah, okay. In that case, if you’re faced with a number of choices that offer varying expectations of v1 but all offer a certainty of, say, 3 units of water, then you’ll want to optimize for v1. But if the choices only have the same expectation of v2, then you won’t be optimizing for v1. So the theorem doesn’t apply, because the agent doesn’t optimize for each value ceteris paribus in the strong sense described in this footnote.
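To make the distinction concrete, here’s a minimal sketch (all lotteries and numbers are hypothetical, invented for illustration): v2 being *certain* at 3 units of water is a strictly stronger condition than the options merely agreeing on E[v2] = 3.

```python
# Hypothetical lotteries: each option is a list of (probability, v1, v2)
# outcomes. All menus and numbers here are made up for illustration.

def e_v1(lottery):
    """Expected value of v1 under the lottery."""
    return sum(p * v1 for p, v1, _v2 in lottery)

def v2_certain(lottery):
    """True iff v2 takes one fixed value in every outcome -- certainty,
    which is stronger than merely having a fixed expectation."""
    return len({v2 for _p, _v1, v2 in lottery}) == 1

# Case 1: v2 is certain (3 units of water no matter what), E[v1] varies.
# The strong ceteris paribus condition holds, so rank options by E[v1].
certain_menu = [
    [(1.0, 5, 3)],                 # v1 = 5 for sure, v2 = 3 for sure
    [(0.5, 2, 3), (0.5, 10, 3)],   # E[v1] = 6,      v2 = 3 for sure
]

# Case 2: both options have E[v2] = 3, but v2 is risky in each.
# The precondition fails: nothing here mandates ranking by E[v1].
risky_menu = [
    [(0.5, 4, 1), (0.5, 4, 5)],    # E[v1] = 4, E[v2] = 3
    [(0.5, 0, 0), (0.5, 12, 6)],   # E[v1] = 6, E[v2] = 3
]

for name, menu in [("certain v2", certain_menu), ("equal E[v2] only", risky_menu)]:
    holds = all(v2_certain(opt) for opt in menu)
    print(f"{name}: precondition holds = {holds}, "
          f"E[v1] per option = {[e_v1(opt) for opt in menu]}")
```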
> But if the choices only have the same expectation of v2, then you won’t be optimizing for v1.
Ok, this is correct. I hadn’t understood the preconditions well enough. It seems the important question now is whether things people intuitively think of as different values (my happiness, total happiness, average happiness) satisfy this condition.
Admittedly, I’m pretty sure they don’t.
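For what it’s worth, here’s a toy sketch of why not (entirely made-up numbers): average happiness is just total happiness divided by population size, and my happiness is one summand of the total, so the three quantities can’t be varied independently in the strong ceteris paribus sense above.

```python
# Toy worlds, numbers invented: each is (my happiness, everyone else's
# happiness). total = mine + others; average = total / population.
worlds = [
    (5, [5, 5]),     # baseline
    (9, [3, 3]),     # my happiness up, total held fixed -> others must drop
    (9, [3, 3, 0]),  # same total, but average moves via population size
]

for mine, others in worlds:
    total = mine + sum(others)
    average = total / (1 + len(others))
    print(f"mine={mine:>2}  total={total:>2}  average={average:.2f}")
```

Holding total happiness certain while varying my happiness forces compensating changes in others’, and average happiness can only move at a fixed total by changing the population, so none of the three can be offered “varying, with the others certain” in the way the theorem’s precondition requires.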