The problem is they have a hard time saying what.
I don’t think that’s true in any important way.
I might say: “Killing Joe is bad because Joe would like not to be killed, and enjoys continuing to live. Also, Joe’s friends would be sad if Joe died.” This is not a sophisticated argument. If an atheist would have a hard time making it, it’s only because one feels awkward making such an unsophisticated argument in a debate about morality.
This doesn’t answer the question. Why is doing things Joe doesn’t like, or making his friends sad, bad? Consequentialism isn’t a moral system by itself; you need axioms or goals.
Because ceteris paribus, I prefer not to make Joe or his friends sad (which is an instance of the more general rule, “don’t violate people’s preferences, ceteris paribus”). And before you say that makes morality “arbitrary” or something along those lines, note that the overwhelming majority of society (in most Western First World countries, anyway—I don’t know how it is in, say, the Middle East) agrees with me.
So yes, technically you could have a preference for violating other people’s preferences, and those preferences would technically be just as valid as mine, but in practice, if you act upon that preference, you are violating one of society’s rules, and game theory says that defectors get punished. So unless you want to get locked up for a long time, don’t kill people.
Of course, you might find this unsatisfactory for several reasons. For example, you might demand that morality hold anywhere and everywhere, whether a society exists to enforce it or not. However, the behavior of other animals in the wild definitely contradicts that idea, and humans, for all their intelligence, are still animals at their core, and therefore likely to behave the same way if deprived of societal norms. (Mind you, given enough time, they could probably implement a society from scratch—after all, we did it once—but that’ll take a long time.) Unless you’re a moral realist or something, which is indefensible for other reasons, I don’t really see how you could argue your way out of this point.
Doesn’t that also imply you should feed utility monsters?
Sure. After all, I value humans much more highly than pigs. Doesn’t that imply that humans are utility monsters, at least compared to other animals?
EDIT: Vegans, on the other hand, should have a much harder time with the idea of utility monsters (at least from what little I know about veganism).
And that’s pretty much the difference between the two kinds of “moral realism”.
You can always keep asking why. That’s not particularly interesting.
In morals, as in logic, you can’t explain something by appealing to something else unless the chain terminates in an axiom.
The question “why is it bad to rape and murder?” can be rephrased as, “how can we determine if a thing is bad, in the case of rape and murder?”
The answer “rape and murder are bad by definition” may be unsatisfying, but at least it’s a workable criterion: everything on the list is bad, everything else is not. But the answer “because they make others sad” assumes you can already determine that making others sad is bad. You substitute one question for another, and unless we keep asking why, we won’t have answered the original question.
Okay, then interpret my answer as “rape and murder are bad because they make others sad, and making others sad is bad by definition”.
Replace “Killing Joe”, with say “not giving Joe a million dollars” in that argument, what changes?
A million dollars is a lot more zero-sum than not killing someone—if I give you a million dollars I lose a million dollars. To make the analogy more accurate, you’d need to stipulate that Joe will kill me if I don’t kill him.
Also, I don’t think it’s fair to ignore the fact that for most people, not killing someone is vastly easier to do at non-self-destructive cost. I appreciate that this is a quantitative argument rather than a categorical counterargument, but if we have atheists who base their sense of morality on a vague consequentialism that they can’t quite fully articulate, that’s still no worse than Robertson’s (presumed) divine command theory, and they should be able to make such arguments without being accused of hypocrisy for not also advocating actions that would score much worse under their vague consequentialism.
And note that many (most?) people and many (most?) legal systems do in fact hold that in such situations (war, self-defence) you are entitled to kill Joe.
No, just that you’ll get some benefit from killing him, e.g., you get to have sex with his wife.
Does anything need to?
I guess you’re worried that if the same argument works in both cases then you might end up obliged to give Joe $1M. But those reasons why you should give Joe the money have exactly parallel reasons why you should keep it, and to zeroth order they all cancel out, so no such obligation.
If you look with a bit more detail, then the reasons might be stronger one way than the other; for instance, if you are quite rich and Joe is quite poor, he might benefit more from the money than you would. We don’t generally have norms saying you should give him the money in this case for all sorts of good reasons, but instead we have taxation (compulsory) and charity (optional) which end up having an effect a bit like saying that rich people should give some of their money to much poorer people.
In typical cases, (1) if you give Joe $1M then your loss will be bigger than Joe’s gain, so even aside from other considerations you probably shouldn’t, and (2) if you kill Joe then Joe’s loss will be bigger than your gain, so even aside from other considerations you probably shouldn’t. So the simple-minded “do whatever makes people happiest” principle (a.k.a. total utilitarianism, but you don’t have to be a total utilitarian for this to be a reason, as opposed to the only possible reason, for doing something) gives the “right” answers in most cases.
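A rough back-of-the-envelope illustration of the “losses outweigh gains” point and the zeroth-order cancellation above, assuming purely for the sake of example that everyone’s utility is logarithmic in wealth (the symbols $W_{\text{me}}$, $W_{\text{Joe}}$ and the transfer $x$ are hypothetical, not anything the comments commit to):

$$
\underbrace{\ln(W_{\text{me}} - x) + \ln(W_{\text{Joe}} + x)}_{\text{total utility after the transfer}}
\quad\text{vs.}\quad
\underbrace{\ln W_{\text{me}} + \ln W_{\text{Joe}}}_{\text{total utility before}}
$$

If $W_{\text{me}} = W_{\text{Joe}} = W$, the left-hand side is $\ln(W^2 - x^2) < \ln(W^2)$, so any transfer is a net utility loss and there is no obligation to make it; only when $W_{\text{me}}$ is much larger than $W_{\text{Joe}}$ does the comparison flip, which is the “quite rich and Joe is quite poor” case mentioned above.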
No, I’m claiming neither Kindly nor you actually believe the argument you’ve given.
Except, you’re not doing that, i.e., you’re not giving all your income to charity. So since you’re willing to ignore parts of your ethics when it’s inconvenient, why not also ignore the part about not killing Joe when it would be convenient were Joe to die?
Your overconfidence in your mind-reading abilities is noted.
The fact that someone doesn’t act as a perfect utility maximizer doesn’t mean that utility gains aren’t worth seeking, for them or for others. If you ask “why did you buy that thing?” and I say I bought it because it was half the price of the alternative, am I refuted if you point out that I don’t always buy the cheapest things I can?
As I said: a reason, not the only possible reason.
How do you distinguish the part of your ethics that you ignore in practice, e.g., not giving all your money to charity, from the part you insist you and everybody else follow, e.g., not killing Joe even though he’s being really, really annoying?
Giving all my money to charity isn’t a part of my ethics.
Increasing net utility (or something of the kind) is one of the things I care about. So the fact that something increases net utility is a reason to do it, and the fact that something decreases net utility is a reason not to. But net utility isn’t the only thing I care about, so a thing that increases net utility isn’t necessarily a thing I think I should do.
What I insist on, though, is another matter again. That’s a matter of Schelling points and traditions and the like, optimized (inter alia) for being easy to remember and intuitively plausible.
So:
Giving $1M to Joe: increases his utility, decreases mine, probably not a win overall in terms of net utility. Fails various other tests too. Not in any sense any sort of moral obligation.
Giving $100 to Joe, who is much poorer than me: net utility increase, might be a good thing to do on those terms. Probably reasonable not to do simply on the grounds that I care more about my own utility than that of strangers, that if I’m trying to do maximum good there are others who need the money much more than Joe, etc.
Giving $100 to a carefully chosen effective charity: close to the best thing I can do for net utility with the money. I still care more about my own utility than about strangers’, though, so not necessarily obligatory even “internally”.
Giving at least a few percent of one’s income to effective charities, provided one is reasonably comfortable financially: almost always a big net utility gain, not too burdensome, has the same form as various traditional practices, easy to remember and to do. I’d be comfortable recommending this as a principle everyone should be following.
The attentive reader will notice that not killing people just for being annoying clearly fits into the same category as the last of those.
What changes is that I would like to have a million dollars as much as Joe would. Similarly, if I had to trade between Joe’s desire to live and my own, the latter would win.
In another comment you claim that I do not believe my own argument. This is false. I know this because if we suppose that Joe would like to be killed, and Joe’s friends would not be sad if he died, then I am okay with Joe’s death. So there is no other hidden factor that moves me.
I’m not sure what the observation that I do not give all of my money away to charity has to do with anything.
Um, what are you using to compare preferences across people?
How about Joe’s desire to live against your desire to not have him annoy you, or to have sex with his wife, or any number of other possible motives?
Do you have a point?
A rule whereby you do not kill people without their consent is much easier to implement, and results in many fewer bad consequences (including perverse incentives), than a rule whereby you do not refuse to give people a million dollars without their consent.
I’m not talking about a general rule against killing, I’m talking about killing this particular guy named Joe, who’s really annoying me.
Intuition. Terminal values.
You’d be amazed what can seem intuitive when you find yourself in a situation where it would be really convenient for Joe to die.
That would mean that atheist morality is context-dependent, for instance applying different standards in peacetime and in wartime. Historically, Christian morality seems to be similar.
For all that Christian moralists criticize situationalist ethics, I’ve found that all ethical systems inevitably end up being situationalist; e.g., “thou shalt not kill” except when God commands otherwise.