“In a society in which the narrow pursuit of material self-interest is the norm, the shift to an ethical stance is more radical than many people realize. In comparison with the needs of people starving in Somalia, the desire to sample the wines of the leading French vineyards pales into insignificance. Judged against the suffering of immobilized rabbits having shampoos dripped into their eyes, a better shampoo becomes an unworthy goal. An ethical approach to life does not forbid having fun or enjoying food and wine, but it changes our sense of priorities. The effort and expense put into buying fashionable clothes, the endless search for more and more refined gastronomic pleasures, the astonishing additional expense that marks out the prestige car market in cars from the market in cars for people who just want a reliable means to getting from A to B, all these become disproportionate to people who can shift perspective long enough to take themselves, at least for a time, out of the spotlight. If a higher ethical consciousness spreads, it will utterly change the society in which we live.” —Peter Singer
The intended effect, presumably, is that the more reminders like this I read, the more ethical I become. The actual effect is that the more of this I read, the less interested in ethics I become. Maybe I am extraordinarily selfish and this effect doesn’t happen to most people, but it should at least be considered that constant preaching of moral duties can have counterproductive results.
I suspect it’s because the authors of these ethical reminders are usually very bad at understanding human nature.
What they essentially do is associate “ethical” with “unpleasant”, because as long as you have some pleasure, you are obviously not ethical enough; you could do better by giving up some more pleasure, and it’s bad that you refuse to do so. The attention is drawn away from good things you are really doing, to the hypothetical good things you are not doing.
But humans are usually driven by small incentives, by short-term feelings. The best thing our rationality can do is align these short-term feelings with our long-term goals, so that we actually feel happy when contributing to them. And how exactly do these ethical reminders contribute to that process? Mostly by undercutting your short-term ethical motivators, by always reminding you that what you did was not enough and that you therefore don’t deserve any feeling of satisfaction. Gradually they turn these motivators off, and you no longer feel like doing anything ethical, because they have convinced you (your “elephant”) that you can’t.
Ethics without understanding human nature is just a pile of horseshit. Of course that does not prevent other people from admiring those who speak it.
Yes. And it works this way even without insisting that more can be done; even if you live up to the demands, or even if the moral preachers recognise your right to be happy sometimes, the warm feeling from doing good is greatly diminished when you are told that philanthropy is simply expected, that helping others is not something one does naturally with joy, but that it should be a conscious effort, hard work, to be done properly.
xkcd reference.
Not to mention the remarks of Mark Twain on a fundraiser he attended once:
Well, Hawley worked me up to a great state. I couldn’t wait for him to get through [his speech]. I had four hundred dollars in my pocket. I wanted to give that and borrow more to give. You could see greenbacks in every eye. But he didn’t pass the plate, and it grew hotter and we grew sleepier. My enthusiasm went down, down, down - $100 at a time, till finally when the plate came round I stole 10 cents out of it. [Prolonged laughter.] So you see a neglect like that may lead to crime.
It might be worth taking a look at Karen Horney’s work. She was an early psychoanalyst who wrote that if a child is abused, neglected, or has normal developmental stages overly interfered with, they are at risk of concluding that just being a human being isn’t good enough, and will invent inhuman standards for themselves.
I’m working on understanding the implications (how do you get living as a human being right? :-/ ), but I think she was on to something.
I wasn’t abused or neglected. Did she check experimentally that abuse or neglect is more prevalent among rationalists than in the general population?
Of course that’s not something a human would ordinarily do to check a plausible-sounding hypothesis, so I guess she probably didn’t, unless something went horribly wrong in her childhood.
Second thought: Maybe I should not have mentioned her theory about why people adopt inhuman standards, and just focused on the idea that inhuman standards are likely to backfire, as Viliam_Bur did.
Also—if I reread I’ll check this—I think Horney focused on inhuman standards of already having a quality, which is not quite the same thing as having inhuman standards about what one ought to achieve, though I think they’re related.
I was thinking about prase in particular, who sounds as though he might have some problems with applying high standards in a way that’s bad for him.
Horney died in 1952, so she might not have had access to rationalists in your sense of the word.
When I said it might be worth taking a look at Horney’s work, I really did mean I thought it might be worth exploring, not that I’m very sure it applies. It seems to be of some use for me.
To be clear, I don’t have problems with applying high standards to myself, unless not wishing to apply such standards qualifies as a problem. However I am far more willing to consider myself an altruist (and perhaps behave accordingly) when other people don’t constantly remind me that it’s my moral obligation.
Thanks for the explanation, and my apologies for jumping to conclusions.
I’ve been wondering why cheerleading sometimes damages motivation—there’s certainly a big risk of it damaging mine. The other half would be why cheerleading sometimes works, and what the differences are between when it works and when it doesn’t.
At least for me, I tend to interpret cheerleading as “Let me take you over for my purposes. This project probably isn’t worth it for you, that’s why I’m pushing you into it instead of letting you see its value for yourself.” with a side order of “You’re too stupid to know what’s valuable, that’s why you have to be pushed.”
I’m not sure what cheerleading feels like to people who like it.
No need to apologise.
The feeling of being forced to pursue someone else’s goals is certainly part of it. But even if the goals align, being pushed usually means that one’s good deeds aren’t going to be fully appreciated by others, which too is a great demotivator.
I think the feeling that one’s good deeds will be unappreciated is especially a risk for altruism.
Judged against the suffering of immobilized rabbits having shampoos dripped into their eyes, a better shampoo becomes an unworthy goal.
I’m not at all convinced that this is the case. After all, the shampoos are being designed to be less painful, and you don’t need to test on ten thousand rabbits. Considering the distribution of the shampoos, this may save suffering even if you regard human and rabbit suffering as equal in disutility.
An ethical approach to life does not forbid having fun or enjoying food and wine
I’m not at all convinced of this. It seems to me that a genuinely ethical life requires extraordinary, desperate asceticism. Anything less is to place your own wellbeing above that of your fellow man. Not just above, but many orders of magnitude above, for even trivial luxuries.
Julia Wise would disagree, on the grounds that this is impossible to maintain and you do more good if you stay happy.
And the great philosopher Diogenes would disagree with her.
So, how many lives did he save again?
Clever guy, but I’m not sure if you want to follow his example.
That sounds to me like exactly the sort of excuse a bad person would use to justify valuing their selfish whims over the lives of other people. If we’re holding our ideas to scrutiny, I think the idea that the ‘Sunday Catholic’ school of ethics is consistent could take a long, hard look.
We’re talking about a person who, along with her partner, gives to efficient charity twice as much money as she spends on herself. There’s no way she doesn’t actually believe what she says and still does that.
That she gives more than most others doesn’t imply that her belief that giving even more is practically impossible isn’t hypocritical. Yes, she very likely believes it, thus it is not a conscious lie, but only a small minority of falsities are conscious lies.
Yeah, but there’s also a certain plausibility to the heuristic which says that you don’t get to second-guess her knowledge of what works for charitable giving until you’re—not giving more—but at least playing in the same order of magnitude as her. Maybe her pushing a little bit harder on that “hypocrisy” would cause her mind to collapse, and do you really want to second-guess her on that if she’s already doing more than an order of magnitude better than what your own mental setup permits?
I am actually inclined to believe Wise’s hypothesis (call it H) that being overly selfless can hamper one’s ability to help others. I was only objecting to army1987′s implicit argument that because she (Wise) clearly believes H, Dolores1984′s suspicion of H being a self-serving untrue argument is unwarranted.
There’s an Italian proverb, “Everybody is a faggot with other people’s asses”, meaning more or less “everyone is an idealist when talking about issues that don’t directly affect them, or situations they have never experienced personally”.
You’re using hypocritical in a weird way—I’d only normally use it to mean ‘lying’, not ‘mistaken’.
I use “hypocrisy” to denote all instances of people violating their own declared moral standards, especially when they insist they aren’t doing it after receiving feedback (if they realise what they did once it is pointed out, I’d prefer to call it a ‘mistake’). The reason I don’t restrict the word to deliberate lying is that I think deliberate lying of this sort is extremely rare; self-serving biases are effective in ensuring that.
You underestimate force of habit, prase.
Can you explain?
I don’t believe it’s practically impossible to give more than I do. I could push myself farther than I do. I don’t perfectly live up to my own ideals. Given that I’m a human, I doubt any of you find that surprising.
This is why I think it’s not too terribly useful to give labels like “good person” or “bad person,” especially if our standard for being a “bad person” is “someone with anything less than 100% adherence to all the extrapolated consequences of their verbally espoused values.” In the end, I think labeling people is just a useful approximation to labeling consequences of actions.
Julia, Jeff, and others accomplish a whole lot of good. Would they, on average, end up accomplishing more good if they spent more time feeling guilty about the fact that they could, in theory, be helping more? This is a testable hypothesis. Are people in general more likely to save more lives if they spend time thinking about being happy and avoiding burnout, or if they spend time worrying that they are bad people making excuses for allowing themselves to be happy?
The question here is not whether any individual person could be giving more; the answer is virtually always “yes.” The question is, what encourages giving? How do we ensure that lives are actually being saved, given our human limitations and selfish impulses? I think there’s great value in not generating an ugh-field around charity.
Julia Wise holds the distinction of having actually tried it though. Few people are selfless enough to even make the attempt.
I believe Peter Singer actually originally advocated the asceticism you mention, but eventually moved towards “try to give 10% of your income”, because people were actually willing to do that, and his goal was to actually help people, not uphold a particular abstract ideal.
An interesting implication, if this generalizes: “Don’t advocate the moral beliefs you think people should follow. Advocate the moral beliefs which hearing you advocate them would actually cause other people to behave better.”
Just a sidenote: If you are the kind of person who is often worried about letting people down, entertaining the suspicion that most people follow this strategy already is a fast, efficient way to drive yourself completely insane.
“You’re doing fine.”
“Oh, I know this game. I’m actually failing massively, but you thought, well, this is the best he can do, so I might as well make him think he succeeded. DON’T LIE TO ME! AAAAH...”
Sometimes I wonder how much of LW is “nerds” rediscovering on their own how neuro-typical communication works.
I don’t mean to say I am not a “nerd” in this sense :).
The result bears about as much resemblance to real people as an FRP character sheet and rulebook.
Is it justified? Pretend we care nothing for good and bad people. Do these “bad people” do more good than “good people”?
Do you live a life of extraordinary, desperate asceticism? If not, why not? If so, are you happy?
Well, Jeff and I give about a third of our income, so I’d say we’re not Sunday Catholics but Sunday-Monday-and-part-of-Tuesday Catholics.
Seriously, though, I advocate that people do what will result in the most good, which usually does not mean trying for perfection. Dolores1984, you’ve said before that rather than fail at a high standard of helping you’d rather not help at all (correct me if that summary is wrong). I’d rather see people set a standard in keeping with their level of motivation, if that’s what it takes for them to take a stab at helping.
That’s fair. In my case, I think I’ve decided that, so long as we’re all going to be bad people, and value some human life much more than others, I’d rather care a lot about a few people than a little about a lot of people, and calibrate my charitable giving accordingly. It does not seem, in particular, less morally defensible, and it’s certainly more along the lines of what humans were built to do. To that end, I adopted a shelter cat who was about to be put down. My views may change slightly, however, when I am less thoroughly and completely broke.
Fallacy of grey much? We’re all going to be bad people, but some of us are going to be worse people than others.
The coalition of modules in your mind that believes asceticism to be the only acceptable solution is most likely vastly outnumbered by the hedonistic modules. (Most people for whom this wasn’t the case were probably filtered out of the gene pool.) As with politics, if you refuse to make compromises and insist on pushing your agenda while outnumbered, you will lose, or at best (worst?) create a deadlock in which nobody is happy. If you’re not so absolute, you’re more likely to achieve at least some of your aims.
Or, as Carl Shulman put it:
As those who know me can attest, I often make the point that radical self-sacrificing utilitarianism isn’t found in humans and isn’t a good target to aim for. Almost no one would actually take on serious harm with certainty for a small chance of helping distant others. Robin Hanson often presents evidence for this, e.g. this presentation on “why doesn’t anyone create investment funds for future people?” However, sometimes people caught up in thoughts of the good they can do, or a self-image of making a big difference in the world, are motivated to think of themselves as really being motivated primarily by helping others as such. Sometimes they go on to an excessive smart sincere syndrome, and try (at the conscious/explicit level) to favor altruism at the severe expense of their other motivations: self-concern, relationships, warm fuzzy feelings.
Usually this doesn’t work out well, as the explicit reasoning about principles and ideals is gradually overridden by other mental processes, leading to exhaustion, burnout, or disillusionment. The situation winds up worse according to all of the person’s motivations, even altruism. Burnout means less good gets done than would have been achieved by leading a more balanced life that paid due respect to all one’s values. Even more self-defeatingly, if one actually does make severe sacrifices, it will tend to repel bystanders.
If I may be so bold as to summarize this thread:
Whatever utility calculus you follow, it is a mathematical model.
“All models are false.”
In particular, what’s going wrong here is that your model treats you, the agent, as atomic. In reality, as Kaj Sotala described very well below, you are not an atomic agent; you have an internal architecture, and this architecture has very important ramifications for how you should think about utilities.
If I may make an analogy from the field of AI: in the old days, AI was concerned with something called “discrete search,” which is just a brute-force way to look for an optimum in a state space where each state is essentially an atomic point. The alpha-beta pruning search Deep Blue used to play chess is an example of discrete search. At some point it was realized that for many problems, atomic point-like states resulted in a combinatorial explosion, and moreover that states had salient features describable by, say, logical languages. As this realization was implemented, you no longer had a state-as-a-point but a state-as-a-collection-of-logical-statements, and the field of planning was born. Planning has some similarities to discrete search, but because we “opened up” the states into full-blown logical descriptions, the character of the problem is quite different.
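A toy sketch of that contrast (my own illustration, not from the comment; the graph and the key-and-door domain are invented for the example): the same breadth-first search is run first over opaque atomic states, and then over factored STRIPS-style states whose individual facts the searcher can inspect against operator preconditions.

```python
from collections import deque

# --- Discrete search: states are opaque labels; all we can do is enumerate moves.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def bfs(start, goal):
    """Brute-force breadth-first search over atomic states."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# --- Planning: a state is a set of logical facts; each operator is
# (name, preconditions, add-list, delete-list), so the searcher "sees inside".
ops = [
    ("pick_up_key", {"at_door", "key_on_floor"}, {"has_key"}, {"key_on_floor"}),
    ("open_door",   {"at_door", "has_key"},      {"door_open"}, set()),
]

def plan(state, goal):
    """The same BFS, but over factored states (frozensets of facts)."""
    frontier, seen = deque([(frozenset(state), [])]), {frozenset(state)}
    while frontier:
        s, steps = frontier.popleft()
        if goal <= s:                        # every goal fact holds
            return steps
        for name, pre, add, delete in ops:
            if pre <= s:                     # preconditions satisfied
                s2 = frozenset((s - delete) | add)
                if s2 not in seen:
                    seen.add(s2)
                    frontier.append((s2, steps + [name]))
    return None

print(bfs("A", "D"))                                     # ['A', 'B', 'D']
print(plan({"at_door", "key_on_floor"}, {"door_open"}))  # ['pick_up_key', 'open_door']
```

The search loop is nearly identical in both cases; what changes is only the representation of a state, which is the point of the analogy.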
I think we need to “open up the agent.”