The two contrasts you’ve set up (happiness vs. desire-satisfaction, and temporal-person-slices vs. unique-rationalized-person-idealization) aren’t completely independent. For instance, if you weight all of a person’s temporal slices equally, then all of their desires or happinesses can be weighed against one another on a common scale; whereas if you take the ‘idealized rational transformation of my friend’ route, you can disregard essentially all of his empirical desires and pleasures, depending on just how you go about the idealization. (The toy sketch after this list makes the contrast concrete.) There are three criteria to keep in mind here:
1. Does your ethical system attend to how reality actually breaks down? Can we find a relatively natural and well-defined notion of ‘personal identity over time’ that solves this problem? If not, that strengthens the case for treating the fundamental locus of moral concern as a person-relativized-to-a-time, rather than as a person-extended-over-a-lifetime.
2. Does your ethical system admit of a satisfying reflective equilibrium? Do your values end up in tension with one another, or underdetermine what the right choice is? If so, you may have taken a wrong turn.
3. Are these your core axioms, or are they just heuristics for approximating the right utility-maximizing rule? If the latter, then the right question isn’t Which Is The One True Heuristic, but which heuristics have the most severe and frequent biases. For instance, the idealized-self approach has some advantages (e.g., it lets us disregard the preferences of brainwashed people in favor of those of their unbrainwashed selves), but it also carries large risks precisely because it is less anchored to the person’s actual, empirical psychology. See Isaiah Berlin’s discussion of the rational self in ‘Two Concepts of Liberty’.
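To make the first contrast concrete, here is a minimal Python sketch of the two aggregation rules. Everything in it is an invented assumption for illustration: the slice model, the `idealize` filter, and the smoker example are hypothetical, not anything specified above.

```python
# Toy contrast between the two aggregation rules discussed above.
# Every name and number here is invented for illustration.

from typing import Callable

# A person modeled as a sequence of time-slices, each mapping a
# desire to the strength with which that slice holds it.
PersonSlices = list[dict[str, float]]

def slice_weighted_welfare(slices: PersonSlices,
                           satisfied: set[str]) -> float:
    """Equal weighting of temporal slices: every slice's desires
    trade off against every other slice's on one common scale."""
    return sum(strength
               for sl in slices
               for desire, strength in sl.items()
               if desire in satisfied)

def idealized_welfare(slices: PersonSlices,
                      satisfied: set[str],
                      idealize: Callable[[str], bool]) -> float:
    """Idealized-self weighting: only desires that survive the
    idealization filter count at all; the verdict hinges entirely
    on how `idealize` is defined, which is exactly the risk noted
    in criterion 3."""
    return sum(strength
               for sl in slices
               for desire, strength in sl.items()
               if desire in satisfied and idealize(desire))

# A smoker who mostly wants cigarettes at t0 and mostly wants
# health at t1, where only the cigarette desire gets satisfied.
person = [{"cigarettes": 0.75, "health": 0.25},
          {"cigarettes": 0.25, "health": 0.75}]

print(slice_weighted_welfare(person, {"cigarettes"}))  # 1.0
print(idealized_welfare(person, {"cigarettes"},
                        idealize=lambda d: d != "cigarettes"))  # 0
```

On the slice-weighting rule the satisfied cravings count in full; on the idealized rule, a filter that screens out ‘cigarettes’ zeroes them out entirely, which is the sense in which the idealization step can disregard essentially all of someone’s empirical desires.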