Can you describe a situation where the whole of the ends don’t justify the whole of the means where an optimal outcome is achieved, where “optimal” is defined as maximizing utility along multiple (or all salient) weighted metrics? I would never advocate a myopic definition of “optimal” that disregards all but one metric. Even if my goal is as simple as “flip that switch with minimal action taken on my part”, I could maybe shoot the light switch with a gun that happens to be nearby, maximizing the given success criterion, but I wouldn’t do that. Why not?

I have many values which are implied. One of those is “cause minimal damage”. Another is “don’t draw the attention of law enforcement or break the law”. Another is “minimize the risk to life”. Each of these has its own weight, and each usually takes priority over “minimize action taken on my part”. The concept of “deserve” doesn’t have to come into it at all. Sure, my neighbor may or may not “deserve” to be put in the line of fire, especially over something as trivial as avoiding getting out of my chair. But my entire point is that you can easily break the concept of “deserve” down into component parts. Simply weigh the pros and cons of shooting the light switch, excluding violations of the concept of “deserve”, and you still usually arrive at similar conclusions.

Where you DON’T reach the same conclusions, I would argue, are cases such as incarceration, where treating inmates as they deserve to be treated might have worse outcomes than treating them in whatever way has optimal outcomes across whichever metrics are most salient to you and the situation (reducing recidivism, maximizing human thriving, life longevity, making use of human potential, minimizing damage, reducing expense...).
The strawman you have minimally constructed, where there is some benefit to murder, would have to be fleshed out a bit before I’d be convinced that murder becomes justifiable in a world which analyzes outcomes without regard to who deserves what, and instead focuses on maximizing along certain usually mutually agreeable metrics, which naturally would have strong negative weights against ending lives early.

The “deserve” concept helps us sum up behaviors that might not have immediate, obvious benefits to society at large. The fact that we all agree upon a “deserve”-based system has multiple benefits, encouraging good behavior and dissuading bad behavior without having to monitor everybody every minute. But not noticing this system, not breaking it down, and just using it unquestioningly vastly reduces the scope of possible actions we even conceive of, let alone partake in. The “deserve”-based system is a cage. It requires effort and care to break free of this cage without falling into mayhem and anarchy. I certainly don’t condone mayhem. I just want us to be able to set the cage aside, see what’s outside of it, and be able to pick actions in violation of “deserve” where those actions have positive outcomes. If “because they don’t deserve it” is the only thing holding you back from setting an orphanage on fire, then by all means, please stay within your cage.
Can you describe a situation where the whole of the ends don’t justify the whole of the means where an optimal outcome is achieved, where “optimal” is defined as maximizing utility along multiple (or all salient) weighted metrics?
Easily, as long as I’m permitted to choose poor metrics, or to choose metrics that don’t align with my values. But then the problem with the example would be poor choice of metrics...
I have many values which are implied. One of those is “cause minimal damage”. Another is “don’t draw the attention of law enforcement or break the law”. Another is “minimize the risk to life”.
Ah, that’s important. By selecting the right values, and assigning weights to them carefully, you bring suitable consideration of the means back.
The difficulty is that choosing the right metrics is a non-trivial problem. The concept of “deserving” is a heuristic—not always accurate, but close enough to work most of the time, and far quicker to calculate than considering every possible influence on a situation.
Having said that, of course, it is not always accurate. Sometimes the outcome that someone deserves is not the best outcome; as with many heuristics, it’s worth thinking very carefully (and possibly talking the situation over with a friend) before breaking it. But that doesn’t mean that it should never be broken, and it certainly doesn’t mean it should never be questioned.
(Incidentally, every situation that I can work out where there appears to be some benefit to murder either comes down to killing X people in order to save Y people, where Y>X—in short, pitting the value “minimize the risk to life” against itself—or requires a near-infinite human population, which we certainly don’t have yet.)
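The weighted-metric framing running through this exchange can be made concrete with a toy sketch. All of the metric names, weights, and per-action scores below are invented purely for illustration; nothing here claims to be the right values, only that once the implied values are weighted in, the mundane option wins without any appeal to “deserve”:

```python
# Toy weighted-metric decision sketch for the light-switch example.
# Every metric, weight, and score here is a made-up illustration.

METRIC_WEIGHTS = {
    "effort_saved": 1.0,     # "minimal action taken on my part"
    "damage": -5.0,          # "cause minimal damage"
    "legal_risk": -10.0,     # "don't draw the attention of law enforcement"
    "risk_to_life": -100.0,  # "minimize the risk to life"
}

# Per-action scores on an arbitrary 0..1 scale for each metric.
ACTIONS = {
    "shoot the light switch": {
        "effort_saved": 0.9,
        "damage": 0.8,
        "legal_risk": 0.9,
        "risk_to_life": 0.3,
    },
    "get up and flip the switch": {
        "effort_saved": 0.1,
        "damage": 0.0,
        "legal_risk": 0.0,
        "risk_to_life": 0.0,
    },
}

def utility(scores):
    """Weighted sum of metric scores: the 'optimal' being maximized."""
    return sum(METRIC_WEIGHTS[m] * s for m, s in scores.items())

# Pick the action with the highest weighted utility.
best = max(ACTIONS, key=lambda a: utility(ACTIONS[a]))
print(best)  # the mundane option wins once the implied values are weighted in
```

The interesting knob, as the reply notes, is the weight table itself: set `risk_to_life` to a mild penalty and the gun starts to look attractive, which is exactly the “poor choice of metrics” failure mode.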