Can anything besides Gary’s preferences provide a justification for saying that “Gary should_gary X”? (My own answer would be “No.”)
Yes, natural laws. If Gary’s preferences do not align with reality, then Gary’s preferences are objectively wrong.′
When people talk about morality they are implicitly talking about fields like decision theory, game theory, or economics. The mistake is to adopt an objective point of view, something similar to CEV. Something like CEV will settle into some kind of game-theoretic equilibrium, yet each of us is a discrete agent that does not maximally value the extrapolated volition of other agents. People usually try to objectify, to find common ground, a compromise. This leads to all sorts of confusion between agents with maximally opposed terminal goals. In other words, if you are an outlier then no common ground exists, and something like CEV will therefore be opposed.
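To make the outlier point concrete, here is a minimal toy sketch (my own construction, not anything from the original discussion): three agents with ideal policies on a line, a “compromise” modeled crudely as the mean ideal point, and the resulting utilities. The agent names, the numbers, and the distance-based utility are purely illustrative assumptions.

```python
# Toy illustration (hypothetical, not from the comment): three agents pick a point
# on the interval [0, 1] standing in for a policy. Each agent's terminal goal is a
# single ideal point; utility falls off with distance. A "CEV-style" aggregation is
# modeled crudely as the mean ideal point (a game-theoretic compromise). An outlier
# whose ideal point is far from everyone else's is strictly worse off under the
# compromise than under its own ideal, so it has reason to oppose the aggregation.

ideal_points = {"agent_a": 0.45, "agent_b": 0.55, "outlier": 1.0}

def utility(agent: str, policy: float) -> float:
    """Utility = negative distance from the agent's ideal policy."""
    return -abs(ideal_points[agent] - policy)

compromise = sum(ideal_points.values()) / len(ideal_points)  # mean ideal point

for agent, ideal in ideal_points.items():
    print(f"{agent}: utility at own ideal = {utility(agent, ideal):.2f}, "
          f"utility at compromise = {utility(agent, compromise):.2f}")

# The outlier loses far more under the compromise than the clustered agents do,
# which is the sense in which there is "no common ground" for an outlier.
```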
ETA
′ I should clarify what I mean by that sentence (if I want people to understand me).
I assume that Gary has a reward function and is the product of an evolutionary process. Gary should alter his preferences because they do not suit his reward function and they decrease his fitness. I realize that in a sense I am just moving the problem to another level. But if Gary’s preferences cannot be approached, then they can provide no justification for any action toward an implied goal. At that point the goal-oriented agent that is Gary will be functionally defunct, and other, more primitive processes will take over and consequently override Gary’s preferences. In this sense reality demands that Gary change his mind.
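As a rough illustration of that last step, here is a toy sketch under strong simplifying assumptions: “Gary” has stated preferences that no available action can serve, plus a lower-level reward function; when the preference layer has nothing to act on, the reward-driven layer picks the action instead. Everything here (the action names, the numbers, the two-layer structure) is my own hypothetical construction.

```python
# Toy sketch (hypothetical, only loosely following the comment's framing):
# Gary has explicit preferences over actions plus a lower-level reward function
# shaped by evolution. If none of the available actions can advance his stated
# preferences, the goal-oriented layer has nothing to act on, and the primitive
# reward-driven layer picks the action instead -- the sense in which his
# preferences get "overridden".

available_actions = ["eat", "sleep", "work"]

# Preference Gary would *like* to act on; "fly" is not available, so his
# preferences cannot be approached at all.
stated_preferences = {"fly": 1.0}

# Evolutionary reward function over the actions that actually exist.
reward = {"eat": 0.9, "sleep": 0.6, "work": 0.3}

def choose_action():
    # Goal-oriented layer: act on stated preferences if any available action serves them.
    actionable = {a: v for a, v in stated_preferences.items() if a in available_actions}
    if actionable:
        return max(actionable, key=actionable.get), "preference-driven"
    # Fallback: the primitive reward-driven process overrides the defunct goal layer.
    return max(reward, key=reward.get), "reward-driven override"

action, mechanism = choose_action()
print(f"Gary does '{action}' via {mechanism}")  # -> Gary does 'eat' via reward-driven override
```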