Because it’s still all being justified on consequentialist grounds, which is how we decide which rules to have and what counts as a virtue, etc., in the first place. They will be the rules and virtues that lead to the best real-world consequences.
The problem here is that what you need to justify is why you call some consequences better than others, because I might beg to differ. If you say “I just do”, I would have to pull out my gun and say “well, I don’t”. In this scenario morality is reduced to might makes right, but then why call it morality? I think the purpose of morality is to give me a guideline for deciding, even when I consider some consequences much more preferable than others, not to act on that preference, because acting on it would negate our ability to peacefully coexist. In which case you might respond that our inability to peacefully coexist is a consequence that I am taking into account, which I think means we either talk about different things and don’t actually disagree, or your reasoning is circular.
If it is the case that we merely talk about different things, I still think it is a good thing to make what I call agency ethics explicit so that we don’t forget to take its consequences into account.
If you meet a paperclip maximizer, pulling out your gun could be a moral response. No, it wouldn’t mean “might makes right”; the causality goes in the opposite direction: in this specific situation, force could be the best way to achieve a moral outcome. We use violence against e.g. viruses or bacteria all the time.
With humans it’s complicated because we actually don’t know our own values. What we feel are approximations, or deductions based on potentially wrong premises. So there is a very real possibility that we will do something to maximize our values, only to realize later that we actually acted against our values. Imagine an atheist reflecting on the memory of the witch he burned back when he was a believer. (What strategy could he have followed as a believer to avoid this outcome?)
So we have some heuristics about which moral judgements are more likely to change and which are less likely to change, and we kinda try to take this into account. It’s usually not explicit, because, well, being open about the possibility that your values may change in the future (and debating which ones are most likely to) does not bring you much applause in a community built around those values. But still, our moral judgement of “hurting random people is evil” is much more stable than our moral judgement of “we must optimize for what Lord Jehovah wants”. Therefore, we hesitate to torture people in the name of Lord Jehovah, even when, hypothetically, it should be the right thing to do.

There are people who don’t do this discounting and always do the “right” thing; we call them fanatics, and we don’t like them, although it may be difficult or impossible to explain explicitly why. But in our minds, there is this intuition that we might be wrong about what the right thing is, and that in some things we are more likely to be wrong than in others. In a way, we are hedging our moral judgements against possible future changes of our values. And it’s not some kind of Brownian motion of values; we can feel that some changes are more likely than other changes.
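A toy way to make this “hedging” concrete (a sketch of my own, with invented numbers; the helper discounted_value is hypothetical, not taken from anything cited here): weight each judgement by an assumed probability that you will still endorse it after your values shift, and treat a repudiated act as having the opposite value.

```python
# Toy model (all numbers invented): discount each moral judgement by the
# assumed probability that it survives future changes in our values.

def discounted_value(apparent_value: float, p_stable: float) -> float:
    """Expected moral value of acting on a judgement that may later flip.

    apparent_value: how good (+) or bad (-) the act looks under current values.
    p_stable: assumed probability that the underlying judgement never changes.
    If the judgement flips, the act counts as having the opposite value.
    """
    return p_stable * apparent_value + (1 - p_stable) * (-apparent_value)

# "Hurting random people is evil" -- a very stable judgement.
print(discounted_value(apparent_value=-100.0, p_stable=0.99))  # -98.0: still clearly bad

# "Torture the heretic; Jehovah demands it" -- looks good under current
# values, but rests on the kind of premise the future atheist will reject.
print(discounted_value(apparent_value=50.0, p_stable=0.55))    # 5.0: barely positive, so hesitate
```

On these made-up numbers, the fanatic’s mistake is setting p_stable to 1.0 for everything.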
And this is probably the reason why we don’t pull a gun on a person living an immoral, but not too horrible, life. (At some level, we do: if there is a terrorist saying that he will execute his hostages, then obviously shoot him if you can.)
I’m not quite sure what your point is and how that relates to what I have written.
The paperclip maximizer, the fanatic, and the terrorist all violate agency ethics, and the virus is not even an agent.
If you are opposed to my explanations, can you find an example where retribution is justified without the other party violating agency, or where someone violates agency while retribution in kind is unjustified?
Sorry. I reacted only to a part of your previous comment, but not to your original argument. So, uhm, here is a slightly silly scenario that examines agency:
There is a terrorist somewhere, holding your family as hostages. He has announced that he is going to execute them in five minutes.
There are no policemen nearby, and they can’t get there in five minutes. Luckily, there is one former soldier. Unfortunately, he doesn’t have a gun with him. Fortunately, there is some other guy, who has a gun, but is not interested in the situation.
So, this soldier goes to the guy with a gun and asks quietly: “Excuse me. We have this situation here, with only one terrorist, who is not paying much attention to what happens around him. Luckily, I was trained exactly for this kind of situation, and could reliably kill him with one shot. Could I borrow your gun, please? Of course, unless you want to do this yourself.”
And the guy says: “Uhm, I don’t care. I have no big problem with giving you my gun, but right at this moment I am watching a very interesting kitten video on YouTube. It only takes ten minutes. So please don’t disturb me, I really enjoy watching this video. We can discuss the gun later.”
So the soldier, respecting this guy’s agency, waits respectfully. Ten minutes later (after your family was executed, and the terrorist now has some new hostages), the video ends, the guy asks: “Sorry, what did you need that gun for?” “To kill a terrorist.” “Yeah, no problem, take it.” The soldier kills the terrorist and everyone goes home. I mean, except for the terrorist and your family; they are dead.
How happy are you about the fact that the soldier respected that guy’s decision to finish watching the kitten video undisturbed? Imagine that the soldier had the option to inconspicuously turn off the wi-fi, so that the guy would have paid him attention sooner; would that have been an ethically preferable option?
The terrorist would be an agent diminishing the value of your scenario, so let’s say a bear is mauling a friend of mine while the guy watching cats on the internet is sitting on his bear repellant. I could push the guy away and save my friend, which of course I would do. However, I’m still committing an infraction against the guy whose bear repellant I stole; I cannot argue that it would have been his moral duty to hand it over to me, and the guy has the right to ask for compensation in return. So I’m still a defector and society would do well to defect against me in proportion, which in this scenario I am of course perfectly willing to accept.
Now let’s say that two people are being mauled by the bear and the guy’s brain is somehow a bear repellant. Should I kill the guy? The retribution I deserve for that would be proportionally worse than in the first case. I might choose to, but I’d be a murderer and deserve to die in return.
So I’m still a defector and society would do well to defect against me in proportion
Which, of course, they wouldn’t do. They wouldn’t have much sympathy for the guy sitting on the bear repellant, who chose not to help. In fact, refusing to help can be illegal.
I suppose in your terms, you could say that the guy-sitting-on-the-repellant is a defector, therefore it’s okay to defect against him.
I suppose in your terms, you could say that the guy-sitting-on-the-repellant is a defector, therefore it’s okay to defect against him.
No. My point is that the guy is not a defector. He merely refuses to cooperate, which is an entirely different thing. So I am the defector whether or not society chooses to defect in return. And I really mean that society would do well to defect against me proportionally in return, in order to discourage defection. Or to put it differently: if I want to help and the guy does not, why should he have to bear (no pun intended) the cost, and not me?
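The deterrence logic here can be put in toy numbers (my own sketch; the payoffs and the helper net_gain are invented for illustration): if society’s retribution is proportional to the harm done, casual defection stops paying, while defection with enough at stake remains possible for anyone willing to accept the cost.

```python
# Toy deterrence model (all payoffs invented): society defects back in
# proportion to the harm done, rather than all-or-nothing.

def net_gain(benefit_of_defecting: float, harm_done: float,
             punishment_multiplier: float = 1.0) -> float:
    """Defector's net gain once society retaliates in proportion."""
    return benefit_of_defecting - punishment_multiplier * harm_done

# Stealing the repellant to save a friend: the benefit (a life) far
# exceeds the proportional cost (compensation for a shove), so I defect
# and willingly pay up.
print(net_gain(benefit_of_defecting=1000.0, harm_done=5.0))  # 995.0

# Defecting for mere convenience: the proportional cost eats the gain,
# which is exactly the discouragement asked for above.
print(net_gain(benefit_of_defecting=3.0, harm_done=5.0))     # -2.0
```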
Societies often punish people who refuse to help. Why not consider people who break the law as defectors?
In fact, that would be an alternative (and my preferred) way to fix your second and third objections to value ethics. Consider everyone who breaks the laws and norms within your community as a defector. Where I live, torture is illegal and most people think it’s wrong to push the fat man, so pushing the fat man is (something like) breaking a norm.
Have you read “Whose Utilitarianism?”? Not sure if it addresses any of your concerns, but it’s good and about utilitarianism.
Okay, makes sense. There could be a technical problem with evaluating a punishment “in proportion”, because some harms could be difficult to quantify, but that is also a (much greater) problem in consequentialist ethics.
Perhaps precommitting works here. It’s a bad idea to make a rule “you must respect people’s agency except when you really need to violate it”. Adopting that rule would be beneficial in specific situations (like the one above) but generally would end in disaster.
If you instead make a rule “you must respect people’s agency unconditionally”, that rule is more practical. But you can’t make that rule and then change your mind when you’re in one of the rare situations where the other way happens to be better; if you did that, so would everyone else, and you’d be screwed on average. So instead you precommit to following the rule and always respect people’s agency, even when not doing so would be beneficial.
It’s a counterfactual mugging where, instead of having to be the kind of person who would give Omega $100, you precommit to being the kind of person who would let his family die in this scenario, because that disposition benefits you in counterfactual scenarios. Thus, letting your family die here is ethical (although it may not be something people could realistically be expected to follow).
(I don’t believe this, by the way, because while you can’t make a rule “respect people’s agency unless you really need to violate it” you can have a rule that says “respect people’s agency unless your excuse is good enough to convince a jury”.)
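For reference, here is the arithmetic behind the counterfactual-mugging analogy, as a minimal sketch (the $100/$10,000 figures are from the usual statement of the problem; the function expected_value is my own illustration):

```python
# Counterfactual mugging, standard form: Omega flips a fair coin.
# Tails: Omega asks you for $100. Heads: Omega pays you $10,000, but only
# if you are the kind of agent who would have paid on tails.

def expected_value(pays_on_tails: bool, prize: float = 10_000.0,
                   cost: float = 100.0, p_heads: float = 0.5) -> float:
    heads_payoff = prize if pays_on_tails else 0.0
    tails_payoff = -cost if pays_on_tails else 0.0
    return p_heads * heads_payoff + (1 - p_heads) * tails_payoff

print(expected_value(pays_on_tails=True))   # 4950.0: the precommitted agent wins on average
print(expected_value(pays_on_tails=False))  # 0.0

# In the analogy above, "always respect agency" plays the role of paying;
# letting your family die is the $100 you lose when the coin goes against you.
```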