I did not say to engineer something so that no one wants to destroy it. Just that if you have actually reached towards the greatest good for the greatest number, then the fewest should want to destroy it.
Or have I misunderstood you?
My argument goes something along the lines of the tautological argument that (I think) Mill (but maybe Bentham) made about Utilitarianism (paraphrasing heavily): “People who object to Utilitarianism on the grounds that it will end up in some kind of calculated dystopia, where we trade off a few people’s happiness for the many, actually prove the principle of utilitarianism in their very objection. Such a system would be anti-utilitarian. No one likes that. Therefore it is not utilitarianism at all.”
Perhaps I misunderstood you. I was merely pointing out that any concrete allocation of resources and status (the primary function of an ethical system) is going to have opponents based on who feels the loss.
It’s not (necessarily) that they object to Utilitarianism; it’s that they object to THIS particular application of it to them. This will be the case for any concrete policy.
I suppose this is technically true, but not all concrete choices are created equal.
Some policies tend towards win-win, for example “Let’s pave the cowpaths.” In that case, the only people bothered are those with a systemic interest in the cowpaths not getting paved. Not to dismiss their interests entirely (maybe they have some job that depends on routing people around the long way), but this will, on balance, tend to involve fewer people and less intense (and more easily answered) opposition than more zero-sum, competitive approaches.
I guess this is getting into a separate argument though: “Win-win thinking is fundamentally more Utilitarian than competitive zero-sum thinking.”
“Win-win thinking is fundamentally more Utilitarian than competitive zero-sum thinking.”
Well, no—that’s my main comment on your post. Any given Utilitarian priority (the aggregation of individual utility that you optimize) is NOT win-win. It’s win-on-average, which is still a loss for some.
Do you believe in the existence of win-wins? If so, why wouldn’t they tend to behave as I am suggesting? And if you believe win-wins exist but think they do not behave this way, then how do you understand a win-win?
I only think the very simplest of examples are fully win-win. Almost all of the real world consists of so many dimensions and players that it’s more win-kinda-win-win-too-much-feels-like-losing-but-maybe-is-technically-a-win-lose-big-win-slightly-etc-for-thousands-of-terms-in-the-equation.
Also, whether something counts as a win or a loss depends a whole lot on what you’re comparing it to. Many things are a slight win compared to worse outcomes (for the person in question) and a loss compared to perfect, but unlikely, outcomes.
I do totally believe that many negotiations are more successful if you can convince the loser that they’re winning. And that a fair number of genuinely cooperative situations exist where all participants benefit and know it. Just not that they’re automatic, nor that they’re the important ones for an ethical system to analyze.
So yes, win-win can happen, but that’s boring—there’s nobody arguing against that. It’s the win-lose and win-win-less-than-I-wanted cases which are actually interesting.