Can you describe a situation where the whole of the ends doesn’t justify the whole of the means, yet an optimal outcome is achieved, where “optimal” is defined as maximizing utility along multiple (or all salient) weighted metrics?
Easily, as long as I’m permitted to choose poor metrics, or to choose metrics that don’t align with my values. But then the problem with the example would be poor choice of metrics...
I have many values which are implied. One of those is “cause minimal damage”. Another is “don’t draw the attention of law enforcement or break the law”. Another is “minimize the risk to life”.
Ah, that’s important. By selecting the right values, and assigning weights to them carefully, you bring suitable consideration of the means back.
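To make “selecting the right values and assigning weights to them carefully” concrete, here is a minimal sketch of a weighted-metric utility comparison. The metric names, weights, and scores are purely hypothetical illustrations of the idea, not anyone’s actual values:

```python
# Minimal sketch: scoring outcomes against weighted metrics.
# Metric names, weights, and scores below are hypothetical.

def utility(outcome, weights):
    """Weighted sum of an outcome's scores on each salient metric."""
    return sum(weights[m] * outcome[m] for m in weights)

# Weights reflect how much each value matters (they sum to 1 here).
weights = {
    "minimal_damage": 0.3,
    "lawfulness": 0.3,
    "risk_to_life": 0.4,  # weighted highest: "minimize the risk to life"
}

# Scores in [0, 1]; higher is better on each metric.
outcome_a = {"minimal_damage": 0.9, "lawfulness": 1.0, "risk_to_life": 0.8}
outcome_b = {"minimal_damage": 0.4, "lawfulness": 0.2, "risk_to_life": 1.0}

# The "optimal" outcome is the one maximizing weighted utility.
best = max([outcome_a, outcome_b], key=lambda o: utility(o, weights))
```

Notice that which outcome wins depends entirely on the weights chosen: shift enough weight onto `risk_to_life` and `outcome_b` overtakes `outcome_a`. That is exactly the point being made here about metric choice doing the real work.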
The difficulty is that choosing the right metrics is a non-trivial problem. The concept of “deserving” is a heuristic—not always accurate, but close enough to work most of the time, and far quicker to calculate than considering every possible influence on a situation.
Having said that, of course, it is not always accurate. Sometimes the outcome that someone deserves is not the best outcome; as with many heuristics, it’s worth thinking very carefully (and possibly talking the situation over with a friend) before breaking it. But that doesn’t mean it should never be broken, and it certainly doesn’t mean it should never be questioned.
(Incidentally, every situation that I can work out where there appears to be some benefit to murder either comes down to killing X people in order to save Y people, where Y>X—in short, pitting the value “minimize the risk to life” against itself—or requires a near-infinite human population, which we certainly don’t have yet)