Why Don’t People Help Others More?
As Peter Singer writes in his book The Life You Can Save: “[t]he world would be a much simpler place if one could bring about social change merely by making a logically consistent moral argument”. Many people one encounters might agree that a social change movement is noble yet not want to do anything to promote it, or want to give more money to a charity yet refrain from doing so. Additional moralizing doesn’t seem to do the trick. …So what does?
Motivating people toward altruism matters for the optimal philanthropy movement. For a start on an answer, as with many things, I turn to psychology—specifically, the psychology Peter Singer catalogues in his book.
A Single, Identifiable Victim
One of the best-known motivations for helping others is a personal connection, which triggers empathy. When psychologists researching generosity paid participants to join a psychological experiment and later gave them the opportunity to donate to the global poverty-fighting organization Save the Children, different groups received different kinds of information.
One random group of participants was told “Food shortages in Malawi are affecting more than three million children”, along with additional information about how strong the need for donations was and how these donations could help stop the food shortages.
Another random group of participants was instead shown the photo of Rokia, a seven-year-old Malawian girl who is desperately poor. These participants were told that “her life will be changed for the better by your gift”.
A third random group of participants was shown the photo of Rokia, told who she is and that “her life will be changed for the better”, but ALSO given the general information about the famine, including the same “food shortages [...] are affecting more than three million”—a combination of the two previous groups.
Lastly, a fourth random group was shown the photo of Rokia and told about her just as the other groups were, and was then given information about another child, identified by name, and told that their donation would affect this child for the better as well.
It’s All About the Person
Interestingly, the group told ONLY about Rokia gave the most money. The group told about both children reported feeling less overall emotion than those who only saw Rokia, and gave less money. The group told about both Rokia and the general famine information gave even less than that, followed by the group that got only the general famine information.1,2 It turns out that information about a single person was the most salient for creating an empathetic response and triggering a willingness to donate.
This pattern continues through additional studies. In another generosity experiment, one group of people was told that a single child needed a lifesaving medical treatment costing $300K, and was given the opportunity to contribute toward the fund. A second random group was told that eight children needed a lifesaving treatment, and that all of them would die unless $300K could be provided, and was given the same opportunity. More people opted to donate toward the single child.3,4
This is the basis for why we’re so willing to chase after lost miners or Baby Jessica no matter the monetary cost, but turn a blind eye to the anonymous masses starving in the developing world. Indeed, the person doesn’t even need to be particularly identified, though it does help. In another experiment, people asked by researchers to make a donation to Habitat for Humanity were more likely to do so if told that the recipient family “has been selected” rather than that it “will be selected”—even though every other part of the pitch was the same, and participants got no information about who the families actually were.5
The Deliberative and The Affective
Why is this the case? Researcher Paul Slovic thinks that humans have two different processes for deciding what to do. The first is an affective system that responds to emotion, rapidly processing images and stories and generating an intuitive feeling that leads to immediate action. The second is a deliberative system that draws on reasoning, and operates on words, numbers, and abstractions, which is much slower to generate action.6
To follow up, the Rokia experiment was run again with another twist. There were two groups: one told only about Rokia exactly as before, and one told only the generic famine information exactly as before. Within each group, half the participants took a survey designed to arouse their emotions by asking things like “When you hear the word ‘baby’, how do you feel?” The other half of each group was given emotionally neutral questions, like math puzzles.
This time, the Rokia group again gave far more, and those within it who had their emotions aroused gave even more than those who heard about Rokia after doing math problems. By contrast, those who heard the generic famine information showed no increase in donation no matter how heightened their emotions were.1
Futility and Making a Difference
Imagine you’re told that there are 3,000 refugees at risk in a camp in Rwanda, and that you could donate toward aid that would save 1,500 of them. Would you do it? And how much would you donate?
Now imagine that the same amount of money still saves 1,500 refugees, but the camp holds 10,000. In an experiment where these two scenarios were presented not as a thought experiment but as realities to two separate random groups, the group that heard of only 3,000 refugees was more likely to donate, and donated larger amounts.7,8
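The two scenarios differ only in the denominator—the absolute impact is identical. A minimal sketch of this “proportion saved” framing (using the numbers from the scenarios above; the function name is just for illustration):

```python
# Futility framing: identical absolute impact, different proportions.

def framing(saved, camp_size):
    """Describe a donation's impact both absolutely and proportionally."""
    return {"lives_saved": saved, "fraction_saved": saved / camp_size}

small_camp = framing(1500, 3000)    # 1,500 lives, 50% of the camp
large_camp = framing(1500, 10000)   # the same 1,500 lives, only 15% of the camp

# The lives saved are identical; only the proportion differs—yet donors
# responded to the proportion, not the absolute number.
assert small_camp["lives_saved"] == large_camp["lives_saved"]
print(small_camp["fraction_saved"], large_camp["fraction_saved"])  # 0.5 0.15
```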
Enter another quirk of our giving psychology, right or wrong: futility thinking. We feel that if we’re not making a sizable difference, it’s not worth making any difference at all—it would be only a drop in the ocean while the problem rages on.
Am I Responsible?
People are also far less likely to help when they’re with other people. In one experiment, students were invited to participate in a market research survey. After handing the students their questionnaires to fill out, the researcher went into a back room separated from the office only by a curtain. A few minutes later, noises strongly suggested that she had climbed onto a chair to get something from a high shelf and then fallen off it, loudly complaining that she couldn’t feel or move her foot.
When a single student was taking the survey, 70% stopped what they were doing and offered assistance. With two students taking the survey, that number dropped dramatically. Most strikingly, when the second student was a stooge who was in on the experiment and never responded, only 7% of the genuine participants helped.9
This is diffusion of responsibility, better known as the bystander effect—we help more often when we think it is our responsibility to do so, and, again for right or for wrong, we naturally look to others to see whether they’re helping before doing so ourselves.
What’s Fair In Help?
It’s clear that people value fairness, even to their own detriment. In a game called “the Ultimatum Game”, one participant is given a sum of money by the researcher, say $10, and told they can split this money with an anonymous second player in any proportion they choose—give them $10, give them $7, give them $5, give them nothing, everything is fair game. The catch, however, is that the second player, after hearing of the split anonymously, gets to accept it or reject it. Should the split be accepted, both players walk away with the agreed amounts. Should it be rejected, both players walk away with nothing.
A Fair Split
The economist, expecting ideally rational and perfectly self-interested players, predicts that the second player will accept any split that gets them money, since anything is better than nothing—and that the first player, understanding this, will naturally offer $1 and keep $9 for himself. At no point are identities revealed, so reputation and retribution are not at issue.
But the results turn out quite different—the vast majority of first players offer an equal split. Yet when an offer of $2 or less comes around, it is almost always rejected, even though $2 is better than nothing.10 This effect persists even when the game is played for thousands of dollars, and across nearly all cultures.
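The gap between the economist’s prediction and the observed behavior can be sketched as a toy payoff function. This is a minimal illustration, assuming a $10 pot; the ~$2 rejection threshold is loosely drawn from the results described above, not a parameter from the cited studies:

```python
# A toy model of one Ultimatum Game round with a $10 pot.
# Real responders typically reject offers of ~$2 or less, leaving both
# players with nothing—contrary to the "rational" prediction.

def ultimatum_round(offer, pot=10, rejection_threshold=2):
    """Return (proposer_payoff, responder_payoff) for one round."""
    if offer <= rejection_threshold:
        # Responder rejects an "unfair" offer: both walk away with nothing.
        return (0, 0)
    return (pot - offer, offer)

# The economist's predicted $1 offer is rejected in practice:
print(ultimatum_round(1))   # -> (0, 0)
# The equal split most proposers actually offer goes through:
print(ultimatum_round(5))   # -> (5, 5)
```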
Splitting and Anchoring in Charity
This sense of fairness carries into helping as well—people generally have a strong tendency not to help more than the people around them, and if they find themselves the only ones helping on a frequent basis, they start to feel like a “sucker”. On the flip side, if others are doing more, they will follow suit.11,12,13
Those told the average donation to a charity nearly always tend to give that amount, even if the average told to them is a lie, having secretly been increased or decreased. And the effect can be replicated without lying—those told about an above-average gift were far more likely to donate more, even attempting to match that gift.14,15 Overall, we tend to match the behavior of our reference class—the people we identify with—and this includes how much we help. We donate more when we believe others are donating more, and less when we believe others are donating less.
Challenging the Self-Interest Norm
But there’s a way to break this cycle of futility, diffused responsibility, and fairness—challenge the norm by openly communicating about helping others. While many religious and secular traditions insist that the best giving is anonymous giving, this turns out not always to be the case. There may be other reasons to give anonymously, but don’t forget the benefits of giving openly—being open about helping inspires others to help, and can challenge the norms of the culture.
Indeed, many organizations now exist to challenge the norms around donation and to create a culture where people give more. GivingWhatWeCan is a community of 230 people (including me!) who have all pledged to donate at least 10% of their income to organizations working on ending extreme poverty, and who submit statements proving it. BolderGiving collects inspiring stories of over 100 people who all give at least 20% of their income, with a dozen giving over 90%! And these aren’t all rich people; some of them are ordinary students.
Who’s Willing to Be Altruistic?
While people are not saints, experiments have shown that we grossly overestimate how self-interested other people are. In one example, people estimated that males would overwhelmingly favor a piece of legislation to “slash research funding to a disease that affects only women”, even though the male respondents themselves did not support such legislation.16
This also manifests as an expectation that people be “self-interested” in their philanthropic cause—people expressed much stronger support for volunteers in Students Against Drunk Driving who themselves knew people killed in drunk-driving accidents than for volunteers with no such personal experience who simply thought it “a very important cause”.17
Alexis de Tocqueville, echoing the early economists who expected $9/$1 splits in the Ultimatum Game, wrote in 1835 that “Americans enjoy explaining almost every act of their lives on the principle of self-interest”.18 But this isn’t always the case, and by challenging the norm, people make it more acceptable to be altruistic. Being altruistic isn’t just for “goody two-shoes”, and it’s praiseworthy, not suspect, to be “too charitable”.
A Bit of a Nudge
One pressing problem in getting people to help has been organ donation—surely no one is inconvenienced by having their organs donated after they have died. So why don’t people sign up? And how could we get more people to sign up?
In Germany, only 12% of the population are registered organ donors. In nearby Austria, that number is 99.98%. Are people in Austria just less worried about what will happen to them after they die, or simply that much more altruistic? It turns out the answer is far simpler—in Germany you must put yourself on the register to become a potential donor (opt-in), whereas in Austria you are a potential donor unless you object (opt-out). While people may be, for right or for wrong, worried about the fate of their body after death, they appear less likely to act on these reservations in opt-out systems.19
While Richard Thaler and Cass Sunstein argue in their book Nudge: Improving Decisions About Health, Wealth, and Happiness that we sometimes suck at making decisions in our own interest and could all do better with more favorable “defaults”, such defaults also matter in helping people.
While opt-out organ donation is a huge deal, there’s a similar idea—opt-out philanthropy. Back before 2008, when the investment bank Bear Stearns still existed, it listed among its guiding principles that philanthropy fosters good citizenship and well-rounded individuals. To this end, it required its 1,000 highest-paid employees to donate 4% of their salary and bonuses to non-profits, and to prove it with their tax returns. This resulted in more than $45 million in donations during 2006. Many employees described the requirement as “getting themselves to do what they wanted to do anyway”.
Conclusions
So, according to this bit of psychology, what could we do to get other people to help more, besides moralize? Well, we have five key take-aways:
(1) present these people with a single and highly identifiable victim that they can help
(2) nudge them with a default of opt-out philanthropy
(3) be more open about our willingness to be altruistic and encourage other people to help
(4) make sure people understand the average level of helping around them, and
(5) instill a responsibility to help and an understanding that doing so is not futile.
Hopefully, with these tips and more, helping people more can become just one of those things we do.
References
(Note: Links are to PDF files.)
1: D. A. Small, G. Loewenstein, and P. Slovic. 2007. “Sympathy and Callousness: The Impact of Deliberative Thought on Donations to Identifiable and Statistical Victims”. Organizational Behavior and Human Decision Processes 102: p143-53
2: Paul Slovic. 2007. “If I Look at the Mass I Will Never Act: Psychic Numbing and Genocide”. Judgment and Decision Making 2(2): p79-95.
3: T. Kogut and I. Ritov. 2005. “The ‘Identified Victim’ Effect: An Identified Group, or Just a Single Individual?”. Journal of Behavioral Decision Making 18: p157-67.
4: T. Kogut and I. Ritov. 2005. “The Singularity of Identified Victims in Separate and Joint Evaluations”. Organizational Behavior and Human Decision Processes 97: p106-116.
5: D. A. Small and G. Loewenstein. 2003. “Helping the Victim or Helping a Victim: Altruism and Identifiability”. Journal of Risk and Uncertainty 26(1): p5-16.
6: Singer cites this from Paul Slovic, who in turn cites it from: Seymour Epstein. 1994. “Integration of the Cognitive and the Psychodynamic Unconscious”. American Psychologist 49: p709-24. Slovic refers to the affective system as “experiential” and the deliberative system as “analytic”. This is also related to Daniel Kahneman’s popular book Thinking, Fast and Slow.
7: D. Fetherstonhaugh, P. Slovic, S. M. Johnson, and J. Friedrich. 1997. “Insensitivity to the Value of Human Life: A Study of Psychophysical Numbing”. Journal of Risk and Uncertainty 14: p283-300.
8: Daniel Kahneman and Amos Tversky. 1979. “Prospect Theory: An Analysis of Decision Under Risk.” Econometrica 47: p263-91.
9: Bibb Latané and John Darley. 1970. The Unresponsive Bystander: Why Doesn’t He Help?. New York: Appleton-Century-Crofts, p58.
10: Martin Nowak, Karen Page, and Karl Sigmund. 2000. “Fairness Versus Reason in the Ultimatum Game”. Science 289: p1773-75.
11: Lee Ross and Richard E. Nisbett. 1991. The Person and the Situation: Perspectives of Social Psychology. Philadelphia: Temple University Press, p27-46.
12: Robert Cialdini. 2001. Influence: Science and Practice, 4th Edition. Boston: Allyn and Bacon.
13: Judith Lichtenberg. 2004. “Absence and the Unfond Heart: Why People Are Less Giving Than They Might Be”. in Deen Chatterjee, ed. The Ethics of Assistance: Morality and the Distant Needy. Cambridge, UK: Cambridge University Press.
14: Jen Shang and Rachel Croson. Forthcoming. “Field Experiments in Charitable Contribution: The Impact of Social Influence on the Voluntary Provision of Public Goods”. The Economic Journal.
15: Rachel Croson and Jen Shang. 2008. “The Impact of Downward Social Information on Contribution Decision”. Experimental Economics 11: p221-33.
16: Dale Miller. 1999. “The Norm of Self-Interest”. American Psychologist 54: p1053-60.
17: Rebecca Ratner and Jennifer Clarke. Unpublished. “Negativity Conveyed to Social Actors Who Lack a Personal Connection to the Cause”.
18: Alexis de Tocqueville in J.P. Mayer ed., G. Lawrence, trans. 1969. Democracy in America. Garden City, N.Y.: Anchor, p546.
19: Eric Johnson and Daniel Goldstein. 2003. “Do Defaults Save Lives?”. Science 302: p1338-39.
(This is an updated version of an earlier draft from my blog.)
This post has clarified something really important for me: why I’ve had a lot of trouble being motivated to expand my business.
When I work with individual people, I’m motivated to help them. But when I think about the broader concept of “helping people”, it feels like something I should care about, but don’t. So, this article made me realize that this isn’t something that’s wrong with me, it’s just normal. (And presumably, it means that when other people talk about how they care about people and their mission, they’re probably thinking of some specific people somewhere in there!)
When I think back to when I’ve been more motivated by my work, it’s been when I’ve had specific exemplars that I’ve thought about. Like, when I was a programmer, I always knew at least some of my software’s users, at the very least as people on the other end of a phone call or email conversation, on up to seeing some of them on a regular basis.
I don’t have the same frequency of contact any more in my business, and in recent years I’ve had the challenge that the people I mainly interact with are people who’ve already been working with me for some time—which means they no longer have the same sort of challenges or needs as people who haven’t worked with me at all. (Indeed, I used to use myself as one of my exemplars, in that I tended to think in terms of, “what do I wish someone else had told me?” or “what would I have wanted to find in a book about this?”… but I am no longer similar enough to that older self that I have any real clue any more what he would’ve wanted or been able to use.)
This post also provides a further rationale for what some internet marketing gurus advise: that you develop a “customer avatar”—an imaginary customer who embodies the traits of your target audience—rather than thinking about demographics or multiple people. This advice is usually given in the context of being better able to write persuasively to that audience, because you’ll have a specific person you’re talking to, and because you’ll be able to better imagine what they need. However, I can see now that it also has the additional benefit of being more motivating: it feels much better to help that one imaginary person who has a problem right now, than to imagine helping countless numbers of vague, faceless people who might at some point have that problem.
It also reminds me of Robert Fritz’s writing: he’s always saying you shouldn’t try to make rules for what you want to create, or what you care about in general, but instead focus on specific creations that you care about existing. I.e., don’t try to define yourself as a painter of landscapes or even a painter; just focus on the next thing you want to make, whether it’s a painting or a business or tonight’s dinner.
And in a final ironic meta-twist, the post itself is an illustration of its own point. I knew before about abstract/concrete construal, but only in the abstract. ;-) This post provided a sufficiently concrete construal that I can actually do something about it. Well done, sir!
Thanks for making me understand something extremely important with regard to creative work: Every creator should have a single, identifiable victim of his creations!
As far as I can tell, unless I have very specific information about the person I’m writing to, I’m writing for myself as of the moment just before I had whatever insight I’m trying to convey.
That could explain why I’m much more concerned with being clear than with being persuasive.
Why on earth would they? Evolving this degree of altruism was impressive enough!
Plus helping people is totally gay.
Laughing out loud here.
Totally agreed. Perhaps I am an exception, but I don’t really help people if there is nothing in it for me. I end up helping people a lot anyway, because there is nearly ALWAYS something in it for you if you know how to take advantage of a situation. But regardless, many people are more altruistic than me and don’t seem to realize they are doing themselves a favour, and some of them are not even acting. It’s amazing, really.
If it benefits you, it isn’t altruism—at least by the conventional biological definition of altruism.
Here is Trivers (1971) in “The Evolution of Reciprocal Altruism” on the topic:
Well, if you count opportunity costs as detrimental, something that benefits both you and me but doesn’t benefit you as much as something else you could do with your time and resources is still altruistic.
Traditionally, you compare with a “null” action—not the best thing you could possibly be doing.
I am confused by the concept here of “Opt-Out Philanthropy” having the example of Bear Stearns. What Bear Stearns did was to require a 4% donation and proof of same. Where is the opt-out option, other than quitting?
Alternate frame: Bear Stearns contributed 5% of profits to charity, and directed it proportional to people’s salaries, then paid costs to document this in a strange way in order to signal.
I think it might be more along the lines of making the employees signal. The employees try to convince each other, and by extension themselves, that they’d have donated anyway. They then proceed to act like the sort of person who would have donated.
Bear Stearns didn’t really have an “opt-out” option. To my knowledge, no company has ever publicly implemented a giving requirement and allowed people to opt-out from it.
How statistics can move us: Not very many people have good “translators” to transform left-brain information like words, statistics, and factual data into the right-brain format of imagery, which triggers emotions. We question our ethics when we realize that “one death is a tragedy, one million deaths is a statistic”, but, to me, this is a design challenge: we simply have not communicated effectively to the right half of the brain. I learned to do this on myself, and it allows me to do really neat things like change my emotional reactions so that I respond differently to deliberate improvements that I make to my ideas. You cannot simply make a person emotional and present them with a statistic. That’s a non sequitur to the right brain. You need to communicate the statistic using specific visualizations.
This is the type of imagery I’m thinking might work: Tell the story of Rokia while showing her face, and then zoom out. Imagine one of those Final Fantasy cut scenes that’s totally seamless, to keep them connected to their feelings about Rokia while the scope increases. You zoom out to show her village. A village is a lot to visually process, so the designer would make a few scenes stand out that people consciously notice and process, and recognize that each person repeats the feeling of Rokia. So, you notice a crying family of people, disabled child struggling, and a kid hugging an extremely skinny dog. This carries the emotional impact of hearing about Rokia and repeats it a few times within that larger scope. Then you zoom out further till you can see the city glowing in unhappy “emergency zone” red graphic effects. (It would work best to show these earlier, at intense emotional points, so that by the time we’re zooming out, the person is conditioned to associate that graphic effect with an “emergency zone” feeling). Then zoom out further to see even more villages glowing in “emergency zone” red, until you have one of those incredible scenes that gives you a sense of awe (like when Harry Potter is flying on his broomstick and the scenery is incredibly detailed, except it’s filled with all these emergency zones.) And then, to bring it all together again, and ensure that this macrocosm is merged with the microcosm of Rokia, bubbles could pop out of each of those zones showing more unhappy scenes—one at a time at first so that you get the gist of what’s in the bubbles—and then more and more rapidly to show this really, really numerous amount of them representing “millions”. Or, you could try literally showing millions of them—the subconscious brain is supposed to be able to process all that. Then the screen stays filled with them for a moment so you can take it all in, and whatever realizations can bubble up from the subconscious into the conscious.
THAT might communicate the scope of the problem to the right side of the brain.
That sort of intensive, macrocosm-detailing visual communication is what the right brain needs in order to hear “millions of Rokias”.
This type of visualization would obviously create a lot of stress—stress is power and this power can be really bad, or it can be really good. So, I answered the question: Would the stress be constructive?
Interesting idea, although you risk triggering associations like the “final fantasy” one you mention above that would switch people’s serious reactions off. You’re also going to hit saturation very soon I think. Somebody needs to do this and draw the curves, figure out how much room there is for sadness in the human mind.
Sadness Engineering. I think I’ve had an idea for a startup.
I’m essentially working on this. Anyone who intends to be working on it seriously, PM me.
Do you think people notice cut scenes like that? Back before I learned anything about graphic design, I didn’t notice the techniques they were using—what I noticed were the images they presented to me in the visual foreground. I don’t think people will get distracted by the technique or associate it with entertainment. I think most wouldn’t notice it. But that’s an interesting point—to wonder how many people would notice that and how many wouldn’t.
Even if that is the case, a really good designer could make it their task to re-frame the technique of seamless cut-scenes such that they look serious or don’t distract you from the serious context. For instance, adding a shaky helicopter ride in that’s so wobbly, you automatically assume it’s real.
A clever designer could get around minor presentation challenges like these, I feel.
Shouldn’t be hard to get this done. Kickstarter? Or even a philanthropic graphic designer?
It would take a brilliant designer to do it well, not to mention the influence of a person with excellent leadership skills contributing their vision for how to transform the stress into purpose, and someone who knows a lot about psychology and visual communication to analyze how to pull off the effect correctly. I have no objection to assisting with making such a project successful, by contributing my understanding of psychology and design, but I have next to no experience doing video and animation. I have projects of my own that take priority due to being pre-existing, so if you or someone you know wants to lead this, go for it.
The harder challenge would be a little later—if it works, all charitable appeals will look the same.
Good one, Nancy. But we might all be more purpose-oriented. It could change us on the inside, if it worked. Even if the appeals looked the same, if people put more toward charity, the result would still be an increase in charitable behavior.
Intense stress can be constructive. You’re totally right that people will not have any idea how to deal with it. This could be either very good or very bad for the charity presenting intensely distressing imagery like what I think is needed to get people to react emotionally to statistics. If you present yourself as the solution to all of this, the guide who makes those feelings constructive, that could be very good. If the people can’t handle the stress, they’ll shut off. If you ASSIST them with handling stress, you will be seen as a leader in a hard situation, a source of comfort that gives a constructive outlet to emotion, meaning to pain. The difference could be this:
You see a crying child, you’re a little sad, you give her 20 bucks.
You see a dying country, you are moved to act now, suddenly life has purpose, you experience a renewed sense of meaning.
It would have to be done very, very carefully to have the most constructive effect. Then again, what doesn’t?
Somebody probably broke their leg next door behind just a curtain, and only 70% of the study subjects would go help? And only 7% would help if another person is in the room and the other person doesn’t go? Is anyone else very surprised by how low these numbers are? I would have expected something like 95% and 50%.
I am surprised, but unsurprised by my surprise.
That is, I’ve come to expect that the results of these sorts of studies will consistently find that the “do the right thing” rates in the population are lower than I would naively expect them to be. That hasn’t really altered my intuitive sense that of course most people will “do the right thing,” although it has severely lowered my confidence in that intuition.
A psych teacher I had in college told the story of introducing a class decades earlier to the Milgram experiment, and (as he always did) asking for a show of hands of how many people in the room thought they would go all the way into the danger zone. Usually he got no hands; this year he got one hand. So after class he asked the guy (an older student) why he’d raised his hand, and the student explained that he was a war veteran, and knew perfectly well how easy it was to get him to do things that “nobody would ever do!”
The funny thing about Milgram is that the people who trusted the scientist and obeyed were factually right even if it didn’t feel that way to them at the time. No experimental subject was being hurt.
Of course, by the same token they were factually wrong: no experimental subject was learning to memorize a list of words.
I thought the experimental subjects reported feeling extreme duress about causing apparent harm to another human being.
You’re referring to the experiment itself; they’re talking about the experiment within the experiment.
I was compelled to post that clarification after being primed for “helping behavior.”
I’m just being overly literal because people believed that an experimental subject would be hurt and coincidentally it was the actual experimental subjects who were hurt. This would have been more apparent if Milgram had just hooked subjects up to real electrodes and told them he was testing reinforcement learning and then instructed them to actually shock themselves (which presumably many people would have done up to some pain threshold).
It would have been even more apparent if he could have lied about the voltage/current levels so that subjects received exactly as much physical pain (in terms of negative utility) to themselves from the actual shock as the emotional pain they would experience if they administered a shock of the imagined strength to another person.
I’m going to guess that it would vary not insignificantly depending on what the person pretending to break their leg looked like.
Before reading the paragraph with statistics I tried guessing what the results might have been, and I ended up with 50% for one person. It did not feel right at all, but just going off of similar studies quoted on this site I thought a healthy dose of pessimism was in order. The results just don’t seem surprising anymore, but it’s nice to be reminded that you have to try really hard to be cynical enough, and that if you want to be the kind of person who is 95% likely to help in a situation like this, you need to do some major self-modification or be self-aware of your memetic and genetic programming often enough to veto it.
It wouldn’t hurt to get first-aid training. Knowing what to do is often more important (practically) than knowing that you should do something. Training for dealing with combative persons is probably useful too.
Random question (not expecting expert-quality answers here, but curious):
If you have only minimal first aid training and no particular combat training, you are in a mostly deserted area, and you hear a gunshot somewhere nearby but out of sight, what do you think is the rational altruistic response?
Edit: In this scenario, you either don’t have a phone, there are no police in the area, or other “easy ways out” are otherwise unavailable. You can choose whether and how to help by yourself, and that’s it.
Is it really a gunshot? Do you assume that the gunshot was at a person rather than a bear or mountain lion? (I don’t know where you are but “deserted” implies no people so an animal or an inanimate object is a more likely target.)
But let’s assume I am in an urban setting, the sound really is a gunshot, and your presumption that an innocent has been shot by an assailant with generally evil intent, e.g. the assailant is likely to shoot me too if I appear, is correct. (OK, I am making a LOT of assumptions here.) In that case, the right answer is quite simple: get as far away as possible. Reasoning:
I have no way to combat the assailant successfully;
I have no way to assist the victim in any meaningful fashion;
there is a good chance I will also become a victim if I attempt to render aid.
In the end you will do no one any good, not even yourself. Also, you will not be able to do anyone any good in future.
Now, if you run away you have the opportunity to get First-Aid, EMT, and firearms training. You can get a concealed weapons permit and carry a gun with you at all times. (Well, in places that permit concealed carry.) Now if you happen to be in a similar situation again you will be in a position to both be an effective combatant AND be able to render aid to the victim. At that point in time you might decide to attempt to render aid. This becomes a much more difficult decision to make … but it wasn’t your original scenario.
Yeah. This is more or less what I’ve come to realize.
The story I was working on was originally prompted by an analogy about the weird psychological quirks that afflicted me when I was trying to decide where to donate money. I felt a compulsion to donate to a less efficient charity because its cause was “almost done,” and I wanted to finish it off and get a satisfying “Ding! Achievement” feeling.
This prompted someone to say “that’s like ignoring evidence that someone’s dying of a gunshot wound, so that you can finish throwing the last few starfish back into the ocean.” Which I then felt compelled to try and turn into a rationalist fan-parable.
A lot of pieces of it seem worthwhile, but the work is buckling and straining over the ridiculousness of a young girl and old man trying to do anything about a nearby gunshot. The gunshot itself isn’t all that important to the story, but I need something that:
a) is obviously more important than a few starfish;
b) is time-sensitive;
c) is plausibly addressable by a young girl and an old man; and
d) offers only uncertain evidence that the bad thing is actually happening.
Why can’t I resist a puzzle?
Hi, I don’t know you, but I’d like to solve your puzzle.
A child opens a door to a house, and ten cats run out. The animal loving cat lady inside is disabled and so they have to get the cats for her. They’re rushing to get all ten cats back inside before any of them wander off too far and get lost. While they’re doing this, they see a child playing near an open manhole. She’s doing cartwheels and rolling on the ground. She’s too far to yell to. While they get the cats, they keep seeing her nearly missing the manhole. They keep thinking about going over there and putting the manhole cover back on. This would obviously take both the old man and the young girl—those are heavy… but they also want to get every cat before they’re gone...
A scream? Perhaps “HELP!”
A car crash?
I was thinking a scream earlier today, but the car crash actually works better for poetic reasons.
A child crying somewhere nearby?
I don’t think this is likely to come up as specified. In a rural area (of the United States, at least; other parts of the world might be different), isolated gunshots probably aren’t indicative of danger to a human and can be safely ignored unless you have good reason not to. In an urban area, you’re probably close to a payphone or another human with a phone, so even if you’re not carrying a phone yourself that should probably be your priority.
That being said, in the situation you’ve given any benefits to the victim would be overwhelmed by danger to you. You don’t know whether the assailant is still in the area, and you don’t know the disposition of anyone involved. It’d probably be slightly safer if you waited a few minutes before investigating (a single shot means a brief incident, and a sane attacker probably wouldn’t stick around long), but your scope for rendering aid is so limited that I still think I’d favor leaving the area and getting help if possible.
Call 911 (or local equivalent) and ask what to do?
Additionally: Prioritize minimizing risk to yourself (and anyone else once you learn of their presence, if any) while grabbing any minimal-risk opportunity to obtain more information on the situation (e.g. seeing the face or hearing the voice of the person who shot and the person who was shot, if any).
Sounds about right. I’m in the process of writing what’s sort of supposed to be a rationalist parable, but the takeaway lesson is vague and I’m not sure what to encourage.
Meant to specify that you didn’t have a phone, or that you are in a place without easy access to cops.
Leave and get help.
What are you going to do against a person with a gun even if someone’s been shot?
There’s a very good chance the person or animal targeted is already dead and it’s too late.
If you move toward a gunshot sound, and visibility isn’t perfect (say you’re going through the bushes) you may be shot because you’re mistaken for a wild animal or enemy.
The last thing you’d want to do is rush over and see what happened or get involved in a conflict when you have no idea how it started or who is telling the truth.
In this scenario I just don’t think “rational” and “altruistic” are compatible (not even using an approximation to an acausal decision theory and assuming that other people do so as well, which makes altruism rational in the prisoner’s dilemma). I’d just run the fuck away.
Anyway, this isn’t analogous at all to the scenario in the study. There’s no way you’ll harm anyone (including yourself) by asking the researcher if she’s OK and (say) calling an ambulance or something if she isn’t.
I think there is a very good case that a hypothetical rational agent motivated entirely by helping others would run away—that doesn’t mean it isn’t altruistic, just that there is realistically nothing to be done to help except leave, survive, and find other people to help.
Call the police and run?
I’m surprised too, and I’d like to think I’d check on her in that situation; I’ve checked up on people with significantly less provocation.
OTOH, how good was the setup? Could the subjects have had a clear view of the actor the whole time, and actually been thinking, “Why is she pretending to be injured? … weirdo”
Peter Singer’s account of the experiment made it clear that all of the action occurred behind a curtain, so the subject could not see what was happening. Unfortunately, since the cited source is a book, I can’t follow up on this easily.
I suspect the figures would vary a lot depending on what country/culture the students are from.
(Also, maybe some of the students had guessed that they were being tested about that, and wanted to screw up the results—though 30% sounds like a lot for this hypothesis. What were they students of?)
I am curious whether they collected data on how often subjects said something under the assumption that the other researcher should go, especially if they saw the back as private.
The subjects were (falsely) told that the other researcher was another student taking the same survey.
Interesting. This raises my level of surprise a bit.
Seriously—they audibly complained about not feeling or moving their foot, and no one did anything? This sounds fishy.
This was really well written. Thanks for posting it.
You’re welcome. Thanks for the encouraging feedback!
Related: Heuristics & Biases in Charity
It’s good to have reminders of these effects, because they are large and important for anyone hoping to impact behavior, altruistically or otherwise.
Rather than see it as a mystery of “why don’t we help each other more?” I think it’s better to think of this as “how do we get people to help each other more, and more usefully?” or in two parts: “How do we get people to do things?” and “What things should we get people to do?” None of this seems intrinsically linked to altruism as such.
“How do we get people to help each other more?” would be a more accurate title, yeah.
The starting point of your subject is the question: “Why should people help others?”
Once you have answered this, we can move on with the discussion.
It’s perfectly possible to study the empirical question of how to influence how altruistic people are, without making any sort of argument for why people should be altruistic. This empirical question seems to be what peter_hurford is writing about.
Yes. But the post’s title is unfortunate:
It can be read to imply that the author is surprised by people helping others less than expected.
“Because that’s how we got here, and that’s how we got all the awesome stuff we have … and induction.”
Maybe I got a little confused with the conditional words of the English language. What I meant was: logically, before one can answer the questions “Why don’t people help each other more?”, one should be able to answer the question “Why do people help each other?”, that is, what is it that makes people help each other in the first place.
Once you have an answer to that, you can proceed to asking why don’t people help each other more than they are doing it now.
From what I’ve seen, the charities that exploit a personal connection to needy individuals (usually “starving” children) to collect donations are routinely among the least efficient, and apparently many are essentially conduits for overvalued tax write-offs. The fraudsters have clearly identified what kind of advertising works best to solicit donations, so why don’t good charities exploit the same advertising? For example, the Red Cross website should be plastered with personal stories of children who are currently living in Red Cross shelters or otherwise benefiting from donations.
Cynical explanation: because advertising that works is low-prestige. (i.e., it does nothing to build the image of the organization, because it’s not a costly signal.)
Note that this applies to (theoretically) profit-oriented businesses as well: it is a tired trope among direct marketing professionals that nearly every businessperson cares more about how the ad reflects on him/her self than how it affects his/her profits.
Actually, this effect can also be seen in the average small business website design: the uneducated businessperson chooses a site design that is personally pleasing and which he/she perceives to signal status, over a site design favoring customer preferences or ease of use.
I’m not saying anybody consciously does this; it’s just that most people lack sufficient reflectiveness to even consider anything other than their gut reactions to these things, and their gut reactions respond positively to personal prestige and signalling. To even try to take somebody else’s point of view is an immense leap of cognitive effort that our brains usually try to avoid as much as humanly possible. ;-)
How is that a donation? Why do they get a tax write off for that?
What do you mean? They have to donate 4% of their income. A donation is a donation, by definition.
Donations in the US are tax deductible.
A donation is when you freely give money. If you’re sending money as a requirement to work, it’s just an odd pay structure.
The US tax write-off for donations doesn’t give you a net profit; it just excludes the donation from your taxable income, as if it had never passed through your hands. And even that applies incompletely: write-offs are taken instead of, not in addition to, the standard deduction. So if the employees were going to deduct a bunch of stuff anyway, then they’re indifferent between the two pay structures; and if not, then they’d be better off if they were just paid 4% less and the company did the donating.
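The indifference claim above can be sketched with a quick back-of-the-envelope calculation. The salary, donation rate, and flat tax rate below are hypothetical, and the simplified model deliberately ignores the standard-deduction interplay just mentioned:

```python
# Simplified flat-tax model (hypothetical numbers, itemizing assumed):
# compare an employee paid $100k who must donate 4% and deducts it,
# against an employee simply paid 4% less while the company donates.

def net_income(gross, donation, tax_rate):
    """Take-home pay when the donation is excluded from taxable income."""
    taxable = gross - donation
    return gross - donation - taxable * tax_rate

# Structure A: paid $100,000, required to donate $4,000 and deduct it.
a = net_income(100_000, 4_000, 0.25)

# Structure B: paid $96,000, company donates on the employee's behalf.
b = net_income(96_000, 0, 0.25)

print(a, b)  # identical take-home pay under this simplified model
```

Under these assumptions both structures leave the employee with the same take-home pay, which is why the comment says an itemizing employee is indifferent; the gap only appears once the standard deduction enters the picture.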
I suppose. Based on my consequentialism, I’m not too bothered by that. I still think that “opt-out philanthropy” is interesting, though admittedly Bear Stearns did not give one the ability to opt-out.
Isn’t “opt-out philanthropy” just quasi-socialism? Or maybe socialism + democracy? I suppose the clear distinction is that tax structures are supposed to apply uniformly, and opt-out is specifically at the individual level.
How many people would take the time to opt out of paying a few thousand dollars in taxes every year? Perhaps if every line item in the budget had to be individually opted-out on a tax form each year it would work on a national scale.
Is it? What does “quasi-socialism” mean? And if it is quasi-socialism, why would that matter?
Opt-out philanthropy would have a very similar effect that taxation does in a socialist society. The differences are that individuals can legally choose whether or not to pay and can also choose who to donate to under opt-out philanthropy. When you average out tax cheats, tax shelters, and philanthropic goals the effects of both seem pretty similar to me. I was wondering what would happen if a government switched from socialism to a pure opt-out donation system, e.g. you can download the national/state/local budget and an opt-out form every year, and fill out the opt-out form for each line item in the budget that you don’t want to donate toward. Then you make your yearly donation/pay your taxes and most people probably won’t spend time opting out of sane and worthwhile line items. Democratic socialism approximates such a system, but by turning line items on and off for everyone at the same time. That may not be optimal.
Those are two big differences, but otherwise I think you’re right. Elsewhere in The Life You Can Save, Peter Singer talks about opt-out philanthropy in the form of an additional 1% income tax automatically donated to a GiveWell top charity, unless you change the target or opt out completely.
I don’t think democratic socialism approximates the system, because no actual (legal) opting out takes place. Secondly, my intuition is that such a system for all taxes would be disastrous, in that I predict it would massively lower the government budget and make funding programs completely unpredictable.
I presume Rokia was able to buy a hybrid and some prime real estate after all this.
The catch here is that it is free money. If the participants had to work for it, even a little bit, I bet that the sharing level would drop significantly, to the lowest level they think they can get away with, whatever it might be.
On the other hand, if the second player had to work for the money, I bet that her threshold for rejecting the offer would be higher. Especially if the participants were told that they had done about the same amount of work.
I’d be worried about being signed up for cryonics in a country with opt-out organ donation.
Why?
The affective/deliberative dichotomy could be the same as near/far from construal level theory. Hypotheses:
Giving someone a number of people that’s been affected, like 3 million, triggers abstract processing and causes those people to be processed as socially distant. (Especially a large number—a small number, like 1, could cause you to start wondering about that particular person, causing them to be processed as socially close.)
The family that “will be selected” is construed as being “far away” temporally, causing them to be processed as socially distant instead of socially close.
By the way, I recommend this review over Robin Hanson’s writings for understanding near/far. Some of the stuff I’ve seen him write on near/far has made me think “no, that sounds wrong”, although I haven’t yet clarified my own interpretations enough to write them up. At least with a review you’re getting it straight from the source.
I think there should be some use of the “moral sphere” model in understanding the dilemma presented. The moral sphere is conceptually easy to understand—each person extends moral consideration, varying from the center (oneself) outward into society (or the world as a whole), until a boundary of moral exclusion is reached, and beyond this boundary exist ‘them’. The model would thus have Buddha as an idealized moral example, having no boundary of exclusion and no decrease in moral consideration from self to the rest of the world.
The next consideration is that of culture here in America, where we have schooled everyone to be... well... pitiful bitches. Seriously, the schooling process both breaks down community—even the sense of community and neighborliness—and creates drones waiting for instruction from authority (school is designed to do this; see John Taylor Gatto and Ivan Illich for history and arguments). The studies you cite (and the fact you used de Tocqueville, who witnessed the US pre-compulsory schooling) indict the culture created by our adoption of a school system meant to create a compliant, mindless consumer society. To really impact the level of altruism in our culture, you would do what you could to steer people away from the school system.
As for immediate, intimate remedies, look at how your “community” is structured—is it really a community (holistic relationships), or just a network (conditional, purposed relationships), or worse, a hierarchical structure (relationships at work)? A final consideration: if you are conditioning yourself to be more responsive in helping and more considerate of others, you are going to find yourself developing skills in asserting authority, taking charge, and trying to solve dilemmas where you are being compelled to sacrifice from your own ‘good will’, or whatever tune your heartstrings get played to—this is undoing what schooling trains into everyone to various degrees.
Would you predict that students raised outside of the school system are, as a group, more altruistic than those raised within it?
I wouldn’t make such a broad prediction, but it is easy to see that schooling decreases personal authority, without which the individual cannot act altruistically or selfishly (I argue both are the same, but depend on what one considers the self—John Livingston’s Rogue Primate expounds on this concept). I would suggest looking at Amish culture as a case study. Historically, you can contrast early America (that of Franklin, Jefferson, Edison, and the other American pioneers) with Hitler’s Germany (the Nazi school system was adopted from the new American schools and called the Indiana system by Germans, or so I have read). Based on these and other examples, I would predict that students raised outside the school system have greater potential for many qualities, including altruism, but that the manifestation of these qualities will vary by environment and individual.