Who says that ethics has to be intuitive?

If what you call “ethics” is unrelated to what we intuitively understand as “ethics”, why call it that?

It’s important to distinguish between the definition of the word “ethics” and the content of ethics. The definition of “ethics” is roughly something like “How one should act”, “How one should be”, “What rules (if any) one should adopt”, etc.; it’s an intuitive definition. The content of ethics is the specifics of how one should act, which need not be intuitive. If how one should act happens to be counterintuitive, that doesn’t prevent it from being part of ethics, because “ethics” means “how one should act”.
For example, suppose it is true that you should push the fat man in front of a trolley, but your moral intuitions tell you otherwise. In that case, pushing the fat man is ethical, because ethics is what one should do, not what one intuits to be ethical.
I’m not so sure about that. However, I’m also unsure how to properly express my objection. I will try, and maybe we will then have to accept that we don’t have the same information and that Aumann’s agreement theorem doesn’t apply.
Our brain recognizes a pattern and informs us of it in the form of an intuitive understanding of a part of reality. Our language system then slaps a name onto that concept. However, the content of the concept is the part formed by our intuition, not the name. If different people use different names for the same concept, that is preferable to different people using the same name to refer to different concepts.
We intuitively understand that there are things people ought or ought not to do, and we label that concept ethics or morality. If you think you have discovered another concept, it should be named differently; or we could take these as the distinction between ethics and morality, and agree that whichever word we choose for my concept, it means what I intended it to mean.
The other side of that is that I may be mistaken about what people mean when they talk about utilitarian ethics or morality, in which case we might have stumbled upon the point where we could dissolve a disagreement, which would be a good thing. In that case I would like to ask: Is it, in your opinion, relevant to the discussion of what people should or should not do that I consider certain actions to be defecting, and that I feel a strong inclination to defect in return, including hurting or killing these people in order to stop their defecting? If no, then we are talking about different things and don’t actually disagree. If yes, then we are probably talking about the same thing, and I maintain that my moral intuitions do play a significant role in the argument.
We intuitively understand that there are things people ought or ought not to do, and we label that concept ethics or morality.
We intuitively have ideas about what people ought or ought not to do, and some people end their investigation of morality there, without looking further into what people actually ought to do. But that doesn’t mean that ethics is limited to what people intuitively think people ought to do; ethics is what people should actually do, whether it’s intuitive or not. For example, it may seem counterintuitive to push an innocent man in front of a trolley: he’s innocent, so it’s wrong to kill him, right? But assuming that the fat man and each person tied to the track have equal value, by not pushing the fat man you’re choosing the preservation of lesser value over the preservation of greater value. So even though pushing the fat man may seem unethical (because of your intuitions), it’s actually the thing that you should do, and therefore it’s ethical.
Is it, in your opinion, relevant to the discussion of what people should or should not do that I consider certain actions to be defecting, and that I feel a strong inclination to defect in return, including hurting or killing these people in order to stop their defecting?
Maybe? I think I need an example to understand what you’re asking.
But assuming that the fat man and each person tied to the track have equal value, by not pushing the fat man you’re choosing the preservation of lesser value over the preservation of greater value.
I understand that that is the definition of utilitarianism. But I’m saying that is not how we decide what should or should not be done. So how do we decide which of us is right? I’d say by going back to our intuitions. You seem to be assuming your conclusion by taking the maximization of value as an axiom.
I think I need an example to understand what you’re asking.
The fat man happens to be my friend. I see a guy trying to push him, and I shoot that guy (edit: fully aware that he was trying to save the five). Would you call me a murderer (in the moral sense, not the legal one)? Or would you acknowledge that I acted in defense of my friend, and just point out that the five who are now dead might have made this outcome less preferable?
You seem to be assuming your conclusion by taking the maximization of value as an axiom.
It’s contradictory to say that something of lesser value is what should be preferred—if so, then what does “lesser value” mean? If something has greater value, we prefer it—that’s what it means for something to have greater value.
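To put that semi-formally (a minimal sketch; the value function \(v\) and the preference relation \(\succ\) are just my notation, not anything we have already agreed on), “greater value” is defined by preference:

\[
x \succ y \iff v(x) > v(y).
\]

Saying that one should prefer the lesser-valued option asserts \(v(x) < v(y)\) together with \(x \succ y\), which by the definition gives \(v(x) > v(y)\): a contradiction.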
The fat man happens to be my friend. I see a guy trying to push him, and I shoot that guy (edit: fully aware that he was trying to save the five). Would you call me a murderer (in the moral sense, not the legal one)? Or would you acknowledge that I acted in defense of my friend, and just point out that the five who are now dead might have made this outcome less preferable?
I wouldn’t call you a murderer because you didn’t kill the five people, and only killed in the defense of another. As for whether the five being dead makes the outcome less preferable, it’s important to remember that value is agent-relative, and “less preferable” presumes an agent who has those preferences. Assuming everyone involved is a stranger to me, I have no reason to value any of them more highly than the others, and since I assign a positive value to human life, I would prefer the preservation of the greater number of lives. On the other hand, you value the life of your friend more than you value the lives of five strangers (and the person trying to save them), so you act based on what you value more highly. There is no requirement that we should agree about what is more valuable—what is more valuable for me may be less valuable for you. (This is why I’m not a utilitarian, despite being a consequentialist.) Since I value the lives of five strangers more highly, I should save them. You value your friend more highly, so you should save him.
Then I guess we actually agree.

We agree on this point. But suppose that the fat man is a stranger to you, and the five people tied to the tracks are strangers as well. If you assign a positive value to strangers’ lives, the five people have a greater value than the one person. So in this case you should push the fat stranger, even though you shouldn’t push your friend.
So if the fat man were not my friend, but just as much a stranger as the five, you would call me a murderer? Because if not, I guess on some level you acknowledge that I operate under a different moral framework, the one I tried to explicate as agency ethics.
Whether you’re a murderer depends on whether you caused the situation, i.e. tied the five to the tracks. If you discover the situation (not having caused it) and then do nothing and don’t save the five, you’re not a murderer. Once you discover the situation, you should save whomever you value more. If the fat man is your friend, you should save him; if everyone is a stranger, then you should save the five and kill the fat man.
What if there are no five people on the track, but a cat, and I just happen to value the cat more than the fat man? Should I push him? If not, what makes that scenario different, i.e. why does it matter whether a human life is at stake?
You should save whatever you value more, whether it’s a human, a cat, a loaf of bread (if you’re the kind of being who really, really likes bread and/or doesn’t care about human life), or whatever.