I accept all the arguments for why one should be an effective altruist, and yet I am not, personally, an EA. This post gives a pretty good avenue for explaining how and why. I’m in Daniel’s position up through chunk 4, and reach the state of mind where
everything is his problem. The only reason he’s not dropping everything to work on ALS is because there are far too many things to do first.
and find it literally unbearable. All of a sudden, it’s clear that to be a good person is to accept the weight of the world on your shoulders. This is where my path diverges; EA says “OK, then, that’s what I’ll do, as best I can”; from my perspective, it’s swallowing the bullet. At this point, your modus ponens is my modus tollens; I can’t deal with what the argument would require of me, so I reject the premise. I concluded that I am not a good person and won’t be for the foreseeable future, and limited myself to the weight of my chosen community and narrowly-defined ingroup.
I don’t think you’re wrong to try to convert people to EA. It does bear remembering, though, that not everyone is equipped to deal with this outlook, and some people will find that trying to shut up and multiply is lastingly unpleasant, such that an altruistic outlook becomes significantly aversive.
This is why I prefer to frame EA as something exciting, not burdensome.
Exciting vs. burdensome seems to be a matter of how you think about success and failure. If you think “we can actually make things better!”, it’s exciting. If you think “if you haven’t succeeded immediately, it’s all your fault”, it’s burdensome.
This just might have more general application.
If I’m working at my capacity, I don’t see how it’s my fault for not having the world fixed immediately. I can’t do any more than I can do and I don’t see how I’m responsible for more than what my efforts could change.
From my perspective, it’s “I have to think about all the problems in the world and care about them.” That’s burdensome. So instead I look vaguely around for 100% solutions to these problems, things where I don’t actually need to think about people currently suffering (as I would in order to determine how effective incremental solutions are), things sufficiently nebulous and far-in-the-future that I don’t have to worry about connecting them to people starving in distant lands.
Do we have any data on which EA pitches tend to be most effective?
I’ve read that. It’s definitely been the best argument for convincing me to try EA that I’ve encountered. Not convincing, currently, but more convincing than anything else.
I’ve seen the claim that EA is about how you spend at least some of the money you put into charity, not a claim that improving the world should be your primary goal.
Once you’ve decided to compare charities with each other to see which would make the most effective use of your money, can you avoid comparing charitable donation with all the non-charitable uses you might make of your money?
Peter Singer, to take one prominent example, argues that whether you do or not (and most people do), morally you cannot. To buy an expensive pair of shoes (he says) is morally equivalent to killing a child. Yvain has humorously suggested measuring sums of money in dead babies. At least, I think he was being humorous, but he might at the same time be deadly serious.
I always find it curious how people forget that equality is symmetrical and works in both directions.
So, killing a child is morally equivalent to buying an expensive pair of shoes? That’s interesting...
See also http://xkcd.com/1035/, last panel.
One man’s modus ponens… I don’t lose much sleep when I hear that a child I had never heard of before was killed.
No, except by interpreting the words “morally equivalent” in that sentence in a way that nobody does, including Peter Singer. Most people, including Peter Singer, think of a pair of good shoes (or perhaps the comparison was to an expensive suit, it doesn’t matter) as something nice to have, and the death of a child as a tragedy. These two values are not being equated. Singer is drawing attention to the causal connection between spending your money on the first and not spending it on the second. This makes buying the shoes a very bad thing to do: its value is that of (a nice thing) - (a really good thing); saving the child has the value (a really good thing) - (a nice thing).
The only symmetry here is that of “equal and opposite”.
Did anyone actually need that spelled out?
These verbal contortions do not look convincing.
The claimed moral equivalence is between buying shoes and killing—not saving—a child. It’s also claimed equivalence between actions, not between values.
A lot of people around here see little difference between actively murdering someone and standing by while someone is killed when we could easily save them. This runs contrary to the general societal views that say it’s much worse to kill someone by your own hand than to let them die without interfering. Or even if you interfere, but your interference is sufficiently removed from the actual death.
For instance, what do you think George Bush Sr’s worst action was? A war? No; he enacted an embargo against Iraq that extended over a decade and restricted basic medical supplies from going into the country. The infant mortality rate jumped to 25% during that period, and other people didn’t fare much better. And yet few people would think an embargo makes Bush more evil than the killers at Columbine.
This is utterly bizarre on many levels, but I’m grateful too—I can avoid thinking of myself as a bad person for not donating any appreciable amount of money to charity, when I could easily pay to cure a thousand people of malaria per year.
When you ask how bad an action is, you can mean (at least) two different things.
How much harm does it do?
How strongly does it indicate that the person who did it is likely to do other bad things in future?
Killing someone in person is psychologically harder for normal decent people than letting them die, especially if the victim is a stranger far away, and even more so if there isn’t some specific person who’s dying. So actually killing someone is “worse”, if by that you mean that it gives a stronger indication of being callous or malicious or something, even if there’s no difference in harm done.
In some contexts this sort of character evaluation really is what you care about. If you want to know whether someone’s going to be safe and enjoyable company if you have a drink with them, you probably do prefer someone who’d put in place an embargo that kills millions rather than someone who would shoot dozens of schoolchildren.
That’s perfectly consistent with (1) saying that in terms of actual harm done spending money on yourself rather than giving it to effective charities is as bad as killing people, and (2) attempting to choose one’s own actions on the basis of harm done rather than evidence of character.
How strongly does it indicate that the person who did it is likely to do other bad things in future?
But this recurses until all the leaf nodes are “how much harm does it do?” so it’s exactly equivalent to how much harm we expect this person to inflict over the course of their lives.
Killing someone in person is psychologically harder for normal decent people than letting them die, especially if the victim is a stranger far away, and even more so if there isn’t some specific person who’s dying. So actually killing someone is “worse”, if by that you mean that it gives a stronger indication of being callous or malicious or something, even if there’s no difference in harm done.
By the same token, it’s easier to kill people far away and indirectly than up close and personal, so someone using indirect means and killing lots of people will continue to have an easy time killing more people indirectly. So this doesn’t change the analysis that the embargo was ten thousand times worse than the school shooting.
But this recurses [...] so it’s exactly equivalent to how much harm we expect [...]
For an idealized consequentialist, yes. However, most of us find that our moral intuitions are not those of an idealized consequentialist. (They might be some sort of evolution-computed approximation to something slightly resembling idealized consequentialism.)
So this doesn’t change the analysis that the embargo was ten thousand times worse [...]
That depends on the opportunities the person in question has to engage in similar indirectly harmful behaviour. GHWB is no longer in a position to cause millions of deaths by putting embargoes in place, after all.
For the avoidance of doubt, I’m not saying any of this in order to deny (1) that the embargo was a more harmful action than the Columbine massacre, or (2) that the sort of consequentialism frequently advocated (or assumed) on LW leads to the conclusion that the embargo was a more harmful action than the Columbine massacre. (It isn’t perfectly clear to me whether you think 1, or think 2-but-not-1 and are using this partly as an argument against full-on consequentialism.)
But if the question is “who is more evil, GHWB or the Columbine killers?”, the answer depends on what you mean by “evil” and most people most of the time don’t mean “causing harm”; they mean something they probably couldn’t express in words but that probably ends up being close to “having personality traits that in our environment of evolutionary adaptedness correlate with being dangerous to be closely involved with”—which would include, e.g., a tendency to respond to (real or imagined) slights with extreme violence, but probably wouldn’t include a tendency to callousness when dealing with the lives of strangers thousands of miles away.
Reminds me of the time the Texas state legislature forgot that ‘similar to’ and ‘identical to’ are reflexive.
I’m somewhat persuaded by arguments that choices not made, which have consequences, like X preventably dying, can have moral costs.
Not INFINITELY EXPLODING costs, which is what you need in order to experience the full brunt of responsibility of “We are the last two people alive, and you’re dying right in front of me, and I could help you, but I’m not going to.” when deciding to buy shoes or not, when there are 7 billion of us, and you’re actually dying over there, and someone closer to you is not helping you.
Reminds me of the time the Texas state legislature forgot that ‘similar to’ and ‘identical to’ are reflexive.
In case anyone else was curious about this, here’s a quote:
Barbara Ann Radnofsky, a Houston lawyer and Democratic candidate for attorney general, says that a 22-word clause in a 2005 constitutional amendment designed to ban gay marriages erroneously endangers the legal status of all marriages in the state.
The amendment, approved by the Legislature and overwhelmingly ratified by voters, declares that “marriage in this state shall consist only of the union of one man and one woman.” But the troublemaking phrase, as Radnofsky sees it, is Subsection B, which declares:
“This state or a political subdivision of this state may not create or recognize any legal status identical or similar to marriage.”
Oops.
Under utilitarianism, every instance of buying an expensive pair of shoes is the same as killing a child, but not every case of killing a child is equivalent to buying an expensive pair of shoes.
Are some cases of killing a child equivalent to buying expensive shoes?
Those in which the way you kill the child is by spending money on luxuries rather than saving the child’s life with it.
Do elaborate. How exactly does that work?
For example, I have some photographic equipment. When I bought, say, a camera, did I personally kill a child by doing this?
(I have the impression that you’re pretending not to understand, because you find that a rhetorically more effective way of indicating your contempt for the idea we’re discussing. But I’m going to take what you say at face value anyway.)
The context here is the idea (stated forcefully by Peter Singer, but he’s by no means the first) that you are responsible for the consequences of choosing not to do things as well as for those of choosing to do things, and that spending money on luxuries is ipso facto choosing not to give it to effective charities.
In which case: if you spent, say, $2000 on a camera (some cameras are much cheaper, some much more expensive) then that’s comparable to the estimated cost of saving one life in Africa by donating to one of the most effective charities. In which case, by choosing to buy the camera rather than make a donation to AMF or some such charity, you have chosen to let (on average) one more person in Africa die prematurely than otherwise would have died.
(Not necessarily specifically a child. It may be more expensive to save children’s lives, in which case it would need to be a more expensive camera.)
Of course there isn’t a specific child you have killed all by yourself personally, but no one suggested there is.
So, that was the original claim that Richard Kennaway described. Your objection to this wasn’t to argue with the moral principles involved but to suggest that there’s a symmetry problem: that “killing a child is morally equivalent to buying an expensive luxury” is less plausible than “buying an expensive luxury is morally equivalent to killing a child”.
Well, of course there is a genuine asymmetry there, because there are some quantifiers lurking behind those sentences. (Singer’s claim is something like “for all expensive luxury purchases, there exists a morally equivalent case of killing a child”; your proposed reversal is something like “for all cases of killing a child, there exists a morally equivalent case of buying an expensive luxury”.) Hence pianoforte611’s response.
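To make the quantifier structure explicit (the notation and the predicate name Equiv are mine, not Singer’s), the two claims are roughly:

$$\forall\, p \in \text{LuxuryPurchases}\ \exists\, k \in \text{ChildKillings}:\ \mathrm{Equiv}(p, k)$$

$$\forall\, k \in \text{ChildKillings}\ \exists\, p \in \text{LuxuryPurchases}:\ \mathrm{Equiv}(k, p)$$

Equiv itself is symmetric; what changes between the two claims is which set gets the universal quantifier.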
You seemed happy to accept an amendment that attempts to fix up the asymmetry. And (I assumed) you were still assuming for the sake of argument the Singer-ish position that buying luxury goods is like killing children, and aiming to show that there’s an internal inconsistency in the thinking of those who espouse it because they won’t accept its reversal.
But I think there isn’t any such inconsistency, because to accept the Singer-ish position is to see spending money on luxuries as killing people because the money could instead have been used to save them, which means that there are cases in which one kills a child by spending money on luxuries.
Your argument against the reversed Singerian principle seems to me to depend on assuming that the original principle is wrong. Which would be fair enough if you weren’t saying that what’s wrong with the original principle is that its reversal is no good.
I have the impression that you’re pretending not to understand, because you find that a rhetorically more effective way of indicating your contempt for the idea we’re discussing.
Nope. I express my rhetorical contempt in, um, more obvious ways. It’s not exactly that I don’t understand, it’s rather that I see multiple ways of proceeding and I don’t know which one you have in mind (you, of course, do).
By the way, as a preface I should point out that we are not discussing “right” and “wrong” which, I feel, are anti-useful terms in this discussion. Morals are value systems and they are not coherent in humans. We’re talking mostly about implications of certain moral positions and how they might or might not conflict with other values.
you are responsible for the consequences of choosing not to do things as well as for those of choosing to do things
Yes, I accept that.
by choosing to buy the camera rather than make a donation to AMF or some such charity, you have chosen to let (on average) one more person in Africa die prematurely than otherwise would have died.
Not quite. I don’t think you can make a causal chain there. You can make a probabilistic chain of expectations with a lot of uncertainty in it. Averages are not equal to specific actions—for a hypothetical example, choosing a lifestyle which involves enough driving so that in 10 years you drive the average number of miles per traffic fatality does not mean you kill someone every 10 years.
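A minimal sketch of that distinction, under an assumed Poisson model with a made-up exposure figure (none of these numbers come from the comment):

```python
import math

# Assume your 10 years of driving add up to exactly the average number of
# miles per traffic fatality, so your expected fatality count is 1.
expected_fatalities = 1.0

# Under a simple Poisson model of rare, independent accidents, the chance
# that you in fact killed no one at all is still about 37%.
p_no_fatalities = math.exp(-expected_fatalities)
print(f"P(no fatalities over the 10 years) = {p_no_fatalities:.0%}")  # about 37%
```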
However in this thread I didn’t focus on that issue—for the purposes of this argument I accepted the thesis and looked into its implications.
Your objection to this wasn’t to argue with the moral principles involved but to suggest that there’s a symmetry problem
Correct.
“killing a child is morally equivalent to buying an expensive luxury” is less plausible than “buying an expensive luxury is morally equivalent to killing a child”
It’s not an issue of plausibility. It’s an issue of bringing to the forefront the connotations and value conflicts.
Singer goes for shock value by putting an equals sign between what is commonly considered heinous and what’s commonly considered normal. He does this to make the normal look (more) heinous, but you can reduce the gap from both directions—making the heinous more normal works just as well.
your proposed reversal is something like “for all cases of killing a child, there exists a morally equivalent case of buying an expensive luxury”.
I am not exactly proposing it, I am pointing out that the weaker form of this reversal (for some cases) logically follows from Singer’s proposition, and if you don’t think it does, I would like to know why it doesn’t.
to accept the Singer-ish position is to see spending money on luxuries as killing people because the money could instead have been used to save them, which means that there are cases in which one kills a child by spending money on luxuries.
Well, to accept the Singer position means that you kill a child every time you spend the appropriate amount of money (and I don’t see what “luxuries” have to do with it—you kill children by failing to max out your credit cards as well).
In common language, however, “killing a child” does not mean “fail to do something which could, we think, on the average, avoid one death somewhere in Africa”. “Killing a child” means doing something which directly and causally leads to a child’s death.
Your argument against the reversed Singerian principle seems to me to depend on assuming that the original principle is wrong.
No. I think the original principle is wrong, but that’s irrelevant here—in this context I accept the Singerian principle in order to more explicitly show the problems inherent in it.
Taking that position conveniently gets one out of having to see buying a TV as equivalent to letting a child die—but I don’t see how it’s a coherent one. (Especially if, as seems to be the case, you agree with the Singerian position that you’re as responsible for the consequences of your inactions as of your actions.)
Suppose you have a choice between two actions. One will definitely result in the death of 10 children. The other will kill each of 100 children with probability 1⁄5, so that on average 20 children die but no particular child will definitely die. (Perhaps what it does is to increase their chances of dying in some fashion, so that even the ones that do die can’t be known to be the result of your action.) Which do you prefer?
I say the first is clearly better, even though it might be more unpleasant to contemplate. On average, and the large majority of the time, it results in fewer deaths.
In which case, taking an action (or inaction) that results in the second is surely no improvement on taking an action (or inaction) that results in the first.
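A quick sketch of the arithmetic in the hypothetical above, assuming (my assumption, it isn’t stated) that the 100 one-in-five risks are independent:

```python
import random

rng = random.Random(0)
trials = 20_000

def option_b_deaths():
    # each of 100 children dies independently with probability 1/5
    return sum(rng.random() < 1 / 5 for _ in range(100))

samples = [option_b_deaths() for _ in range(trials)]
print("Option A deaths (certain):", 10)
print("Option B average deaths:", sum(samples) / trials)  # roughly 20 = 100 * 1/5
print("Option B worse than A in:",
      f"{sum(s > 10 for s in samples) / trials:.0%} of runs")
```

On those assumptions the second option averages twice as many deaths and comes out worse than the first in almost every run, which is the sense in which the first is “clearly better”.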
Incidentally, I’m happy to bite the bullet on the driving example. Every mile I drive incurs some small but non-zero risk of killing someone, and what I am doing is trading off the danger to them (and to me) against the convenience of driving. As it happens, the risk is fairly small, and behind a Rawlsian veil of ignorance I’m content to choose a world in which people drive as much as I do rather than one in which there’s much less driving, much more inconvenience, and fewer deaths on the road. (I’ll add that I don’t drive very much, and drive quite carefully.)
making the heinous more normal works just as well.
I think that when you come at it from that direction, what you’re doing is making explicit how little most people care in practice about the suffering and death of strangers far away. Which is fair enough, but my impression is that most thoughtful people who encounter the Singerian argument have (precisely by being confronted with it) already seen that.
the weaker form of this reversal [...] logically follows from Singer’s proposition and if you don’t think it does, I would like to know why it doesn’t.
I agree: it does. The equivalence seems obvious enough to me that I’m not sure why it’s supposed to change anyone’s mind about anything, though :-).
I don’t see what “luxuries” have to do with it
Only the fact that trading luxuries against other people’s lives seems like a worse problem than trading “necessities” against other people’s lives.
“Killing a child” means doing something which directly and causally leads to a child’s death.
Sure. Which is why the claim people actually make (at least when they’re being careful about their words) is not “buying a $2000 camera is killing a child” but “buying a $2000 camera is morally equivalent to killing a child”.
I said upfront that human morality is not coherent.
However I think that the root issue here is whether you can do morality math.
You’re saying you can—take the suffering of one person, multiply it by a thousand and you have a moral force that’s a thousand times greater! And we can conveniently think of it as a number, abstracting away the details.
I’m saying morality math doesn’t work, at least it doesn’t work by normal math rules. “A single death is a tragedy; a million deaths is a statistic”—you may not like the sentiment, but it is a correct description of human morality. Let me illustrate.
First, a simple example of values/preferences math not working (note: it’s not a seed of a new morality math theory, it’s just an example). Imagine yourself as an interior decorator and me as a client.
You: Welcome to Optimal Interior Decorating! How can I help you?
I: I would like to redecorate my flat and would like some help in picking a colour scheme.
You: Very well. What is your name?
I: Lumifer!
You: What is your quest?
I: To find out if strange women lyin’ in ponds distributin’ swords are a proper basis for a system of government!
You: What is your favourite colour?
I: Purple!
You: Excellent. We will paint everything in your flat purple.
I: Errr...
You: Please show me your preferred shade of purple so that we can paint everything in this particular colour and thus maximize your happiness.
And now back to the serious matters of death and dismemberment. You offered me a hypothetical:
Suppose you have a choice between two actions. One will definitely result in the death of 10 children. The other will kill each of 100 children with probability 1⁄5
Let me also suggest one for you.
You’re in a boat, somewhere offshore. Another boat comes by and it’s skippered by Joker, relaxing from his tussles with Batman. He notices you and cries: “Hey! I’ve got an offer for you!” Joker’s offer looks as follows. Some time ago he put a bomb with a timer under a children’s orphanage. He can switch off the bomb with a radio signal, but if he doesn’t, the bomb will go off (say, in a couple of hours) and many dozens of children will be killed and maimed. Joker has also kidnapped a five-year-old girl who, at the moment, is alive and unharmed in the cabin.
Joker says that if you go down into the cabin and personally kill the five-year-old girl with your bare hands—you can strangle her or beat her to death or something else, your choice—he, Joker, will press the button and deactivate the bomb. It will not go off and you will save many, many children.
Now, in this example the morality math is very clear. You need to go down into the cabin and kill that little girl. Shut up, multiply, and kill.
And yet I have doubts about your ability to do that. I consider that (expected) lack of ability to be a very good thing.
Consider a concept such as decency. It’s a silly thing, there is no place for it in the morality math. You got to maximize utility, right? And yet...
I suspect there were people who didn’t like the smell of burning flesh and were hesitant to tie women to stakes on top of firewood. But then they shut up and multiplied by the years of everlasting torment the witch’s soul would suffer, and picked up their torches and pitchforks.
I suspect there were people who didn’t particularly enjoy dragging others to the guillotine or helping arrange an artificial famine to kill off the enemies of the state. But then they shut up and multiplied by the number of poor and downtrodden people in the country, and picked up their knives and guns.
In a contemporary example, I suspect there are people who don’t think it’s a neighbourly thing to scream at pregnant women walking to a Planned Parenthood clinic and shove highly realistic bloody fetuses into their face. But then they shut up and multiplied by the number of unborn children killed each day, and they picked up their placards and megaphones.
So, no, I don’t think shut up and multiply is good advice always. Sometimes it’s appropriate, but some other times it’s a really bad idea and has bloody terrible failure modes. Often enough these other times are when people believe that morality math trumps all other considerations. So they shut up, multiply, and kill.
Accounting for possible failure modes and the potential effects of those failure modes is a crucial part of any correctly done “morality math”.
Granted, people can’t really be relied upon to actually do it right, and it may not be a good idea to “shut up and multiply” if you can expect to get it wrong… but then failing to shut up and multiply can also have significant consequences. The worst thing you can do with morality math is to only use it when it seems convenient to you, and ignore it otherwise.
However, none of this talk of failure modes represents a solid counterargument to Singer’s main point. I agree with you that there is no strict moral equivalence to killing a child, but I don’t think it matters. The point still holds that by buying luxury goods you bear moral responsibility for failing to save children who you could (and should) have saved.
Yeah, I wondered about adding a note to that effect. But it seems unlikely to me that the AMF is that much more effective than everything else out there. Maybe it’s $4000 now. Maybe it always was $4000. Or $1000. I don’t think the exact numbers are very critical.
I’ve seen the claim that EA is about how you spend at least some of the money you put into charity, not a claim that improving the world should be your primary goal.
RichardKennaway:
Once you’ve decided to compare charities with each other to see which would make the most effective use of your money, can you avoid comparing charitable donation with all the non-charitable uses you might make of your money?
Richard’s question is a good one, but even if there’s no good answer it’s a psychological fact that people can get convinced that they should redirect their existing donations to cost-effective charities but not that charity should crowd out other spending—and that this is an easier sell. So the framing of EA that Nancy describes has practical value.
The biggest problem I have with ‘dead baby’ arguments is that I value babies significantly below high-functioning adults. Given the opportunity to save one or the other, I would pick the adult, and I don’t find that babies have a whole lot of intrinsic value until they’re properly programmed.
I’m not sure why one would optimize your charitable donations for QALYs/utilons if your goal wasn’t improving the world. If you care about acquiring warm fuzzies, and donating to marginally improve the world is a means toward that end, then EA doesn’t seem to affect you much, except by potentially guilting you into no longer considering lesser causes virtuous in the sense that creates warm fuzzies for you.
except by potentially guilting you into no longer considering lesser causes virtuous in the sense that creates warm fuzzies for you.
For me the idea of EA just made those lesser causes not generate fuzzies anymore, no guilt involved. It’s difficult to enjoy a delusion you’re conscious of.
Understanding the emotional pain of others, on a non-verbal level, can lead in at least two directions, which I’ve usually seen called “sympathy” and “personal distress” in the psych literature. Personal distress involves seeing the problem (primarily, or at least importantly) as one’s own. Sympathy involves seeing it as that person’s. Some people, including Albert Schweitzer, claim(ed) to be able to feel sympathy without significant personal distress, and as far as I can see that seems to be true. Being more like them strikes me as a worthwhile (sub)goal. (Until I get there, if ever—I feel your pain. Sorry, couldn’t resist.)
Hey I just realized—if you can master that, and then apply the sympathy-without-personal-distress trick to yourself as well, that looks like it would achieve one of the aims of Buddhism.
apply the sympathy-without-personal-distress trick to yourself
If you do this, would not the result be that you do not feel distress from your own misfortunes? And if you don’t feel distress, what, exactly, is there to sympathize with?
Wouldn’t you just shrug and dismiss the misfortune as irrelevant?
Follow-up question: are all things that we consider misfortunes similar to the “burn yourself” situation, in that there is some sort of “damage” that is part of what makes the misfortune bad, separately from and additionally to the distress/discomfort/pain involved?
Consider a possible invention called a neuronic whip (taken from Asimov’s Foundation series). The neuronic whip, when fired at someone, does no direct damage but triggers all of the “pain” nerves at a given intensity.
Assume that Jim is hit by a neuronic whip, briefly and at low intensity. There is no damage, but there is pain. Because there is pain, Jim would almost certainly consider this a misfortune, and would prefer that it had not happened; yet there is no damage.
So, considering this counterexample, I’d say that no, not every possible misfortune includes damage. Though I imagine that most do.
That is true; but it’s enough to create a single counterexample, so I can simply specify the neuronic whip being used under circumstances where there is no social damage (e.g. the neuronic whip was discharged accidentally, and no-one knew Jim was there to be hit by it).
Let’s say you cut your finger while chopping vegetables. If you don’t feel distress, you still feel the pain. But probably less pain: the CNS contains a lot of feedback loops affecting how pain is felt. For example, see this story from Scientific American. So sympathize with whatever relatively-attitude-independent problem remains, and act upon that. Even if there would be no pain and just tissue damage, as hyporational suggests, that could be sufficient for action.
Huh, that sounds like the sympathy/empathy split, except I think reversed; empathy is feeling pain from others’ distress vs. sympathy is understanding others’ pain as it reflects your own distress. Specifically mitigating ‘feeling pain from others’ distress’ as applied to a broad sphere of ‘others’ has been a significant part of my turn away from an altruistic outlook; this wasn’t hard, since human brains naturally discount distant people and I already preferred getting news through text, which keeps distant people’s distress viscerally distant.
But you don’t have to bear it alone. It’s not as if one person has to care about everything (nor does each single person have to care for all).
Maybe the multiplication (in the example the care for a single bird multiplied by the number of birds) should be followed by a division by the number of persons available to do the caring (possibly adjusted by the expected amount of individual caring).
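A toy version of that multiply-then-divide arithmetic, with every number invented purely for illustration:

```python
# Every number here is invented purely for illustration.
care_per_bird = 3        # arbitrary "units of caring" for one oiled bird
birds = 200_000          # hypothetical scale of the disaster
carers = 50_000          # hypothetical number of people sharing the burden

total_weight = care_per_bird * birds      # the unbearable "multiplied" number
per_person_share = total_weight / carers  # the divided, per-person share
print(total_weight, per_person_share)     # 600000 12.0
```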
Intellectually, I know that you are right; I can take on some of the weight while sharing it. Intuitively, though, I have impossibly high standards, for myself and for everything else. For anyone I take responsibility for caring for, I have the strong intuition that if I was really trying, all their problems would be fixed, and that they have persisting problems means that I am inherently inadequate. This is false. I know it is false. Nonetheless, even at the mild scales I do permit myself to care about, it causes me significant emotional distress, and for the sake of my sanity I can’t let it expand to a wider sphere, at least not until I am a) more emotionally durable and b) more demonstrably competent.
Or in short, blur out the details and this is me:
“Yeah,” said the Boy-Who-Lived, “that pretty much nails it. Every time someone cries out in prayer and I can’t answer, I feel guilty about not being God.”
Also, I forget which post (or maybe HPMOR chapter) I got this from, but… it is not useful to assign fault to a part of the system you cannot change, and dividing by the size of the pre-existing altruist (let alone EA) community still leaves things feeling pretty huge.
Having a keen sense for problems that exist, and wanting to demolish them and fix the place from which they spring is not an instinct to quash.
That it causes you emotional distress IS a problem, insofar as you have the ability to perceive and want to fix the problems in absence of the distress. You can test that by finding something you viscerally do not care for and seeing how well your problem-finder works on it; if it’s working fine, the emotional reaction is not helpful, and fixing it will make you feel better, and it won’t come at the cost of smashing your instincts to fix the world.
It’s Harry talking about Blame, chapter 90. (It’s not very spoily, but I don’t know how the spoiler syntax works and failed after trying for a few minutes)
“That’s not how responsibility works, Professor.” Harry’s voice was patient, like he was explaining things to a child who was certain not to understand. He wasn’t looking at her anymore, just staring off at the wall to her right side. “When you do a fault analysis, there’s no point in assigning fault to a part of the system you can’t change afterward, it’s like stepping off a cliff and blaming gravity. Gravity isn’t going to change next time. There’s no point in trying to allocate responsibility to people who aren’t going to alter their actions. Once you look at it from that perspective, you realize that allocating blame never helps anything unless you blame yourself, because you’re the only one whose actions you can change by putting blame there. That’s why Dumbledore has his room full of broken wands. He understands that part, at least.”
I don’t think I understand what you wrote there, AnthonyC; world-scale problems are hard, not immutable.
“A part of the system that you cannot change” is a vague term (and it’s a vague term in the HPMOR quote as well). We think we know what it means, but then you can ask questions like “if there are ten things wrong with the system and you can change only one, but you get to pick which one, which ones count as a part of the system that you can’t change?”
Besides, I would say that the idea is just wrong. It is useful to assign fault to a part of the system that you cannot change, because you need to assign the proper amount of fault as well as just assigning fault, and assigning fault to the part that you can’t change affects the amounts that you assign to the parts that you can change.
The point is that if you actually believe in, say, Christianity (that is, you truly internally believe and not just go to church on Sundays so that neighbors don’t look at you strangely), it’s not your church community which shares your burden. It’s Jesus who lifts this burden off your shoulders.
Ah, that’s probably not what the parent meant then. What he was referring to was analogous to sharing your burden with the church community (or, in context, the effective altruism community).
Here’s a weird reframing. Think of it like playing a game like Tetris or Centipede. Yep, you are going to lose in the end, but that’s not an issue. The idea is to score as many points as possible before that happens.
If you save someone’s life on expectation, you save someone’s life on expectation. This is valuable even if there are lots more people whose lives you could hypothetically save.
I accept all the argument for why one should be an effective altruist, and yet I am not, personally, an EA. This post gives a pretty good avenue for explaining how and why. I’m in Daniel’s position up through chunk 4.
Ditto, though I diverged differently. I said, “Ok, so the problems are greater than available resources, and in particular greater than resources I am ever likely to be able to access. So how can I leverage resources beyond my own?”
I ended up getting an engineering degree and working for a consulting firm advising big companies what emerging technologies to use/develop/invest in. Ideal? Not even close. But it helps direct resources in the direction of efficiency and prosperity, in some small way. I have to shut down the part of my brain that tries to take on the weight of the world, or my broken internal care-o-meter gets stuck at “zero, despair, crying at every news story.” But I also know that little by little, one by one, painfully slowly, the problems will get solved as long as we move in the right direction, and we can then direct the caring that we do have in a bit more concentrated way afterwards. And as much as it scares me to write this, in the far future, when there may be quadrillions of people? A few more years of suffering by a few billion people here and now won’t add or subtract much from the total utility of human civilization.
Read that at the time and again now. Doesn’t help. Setting threshold less than perfect still not possible; perfection would itself be insufficient. I recognize that this is a problem but it is an intractable one and looks to remain so for the foreseeable future.
Edit: Forget that… I finally get it. Like, really get it. You said:
and find it literally unbearable. All of a sudden, it’s clear that to be a good person is to accept the weight of the world on your shoulders
Oh, my gosh… I think that’s why I gave up Christianity. I wish I could say I gave it up because I wanted to believe what’s true, but that’s probably not true. Honestly, I probably gave it up because having the power to impact someone else’s eternity through outreach or prayer, and sometimes not using that power, was literally unbearable for me. I considered it selfish to do anything that promoted mere earthly happiness when the Bible implied that outreach and prayer might impact someone’s eternal soul.
And now I think that, personally, being raised Christian might have been an incredible blessing. Otherwise, I might have shared your outlook. But after 22 years of believing in eternal souls, actions with finite effects don’t seem nearly as important as they probably would had I not come from the perspective that people’s lives on earth are just specks, just one-infinitieth of total existence.
I accept all the argument for why one should be an effective altruist, and yet I am not, personally, an EA. This post gives a pretty good avenue for explaining how and why. I’m in Daniel’s position up through chunk 4, and reach the state of mind where
and find it literally unbearable. All of a sudden, it’s clear that to be a good person is to accept the weight of the world on your shoulders. This is where my path diverges; EA says “OK, then, that’s what I’ll do, as best I can”; from my perspective, it’s swallowing the bullet. At this point, your modus ponens is my modus tollens; I can’t deal with what the argument would require of me, so I reject the premise. I concluded that I am not a good person and won’t be for the foreseeable future, and limited myself to the weight of my chosen community and narrowly-defined ingroup.
I don’t think you’re wrong to try to convert people to EA. It does bear remembering, though, that not everyone is equipped to deal with this outlook, and some people will find that trying to shut up and multiply is lastingly unpleasant, such that an altruistic outlook becomes significantly aversive.
This is why I prefer to frame EA as something exciting, not burdensome.
Exciting vs. burdensome seems to be a matter of how you think about success and failure. If you think “we can actually make things better!”, it’s exciting. If you think “if you haven’t succeeded immediately, it’s all your fault”, it’s burdensome.
This just might have more general application.
If I’m working at my capacity, I don’t see how it’s my fault for not having the world fixed immediately. I can’t do any more than I can do and I don’t see how I’m responsible for more than what my efforts could change.
From my perspective, it’s “I have to think about all the problems in the world and care about them.” That’s burdensome. So instead I look vaguely around for 100% solutions to these problems, things where I don’t actually need to think about people currently suffering (as I would in order to determine how effective incremental solutions are), things sufficiently nebulous and far-in-the-future that I don’t have to worry about connecting them to people starving in distant lands.
Do we have any data on which EA pitches tend to be most effective?
I’ve read that. It’s definitely been the best argument for convincing me to try EA that I’ve encountered. Not convincing, currently, but more convincing than anything else.
I’ve seen the claim that EA is about how you spend at least some of the money you put into charity, not a claim that improving the world should be your primary goal.
Once you’ve decided to compare charities with each other to see which would make the most effective use of your money, can you avoid comparing charitable donation with all the non-charitable uses you might make of your money?
Peter Singer, to take one prominent example, argues that whether you do or not (and most people do), morally you cannot. To buy an expensive pair of shoes (he says) is morally equivalent to killing a child. Yvain has humorously suggested measuring sums of money in dead babies. At least, I think he was being humorous, but he might at the same time be deadly serious.
I always find it curious how people forget that equality is symmetrical and works in both directions.
So, killing a child is morally equivalent to buying an expensive pair of shoes? That’s interesting...
See also http://xkcd.com/1035/, last panel.
One man’s modus ponens… I don’t lose much sleep when I hear that a child I had never heard of before was killed.
No, except by interpreting the words “morally equivalent” in that sentence in a way that nobody does, including Peter Singer. Most people, including Peter Singer, think of a pair of good shoes (or perhaps the comparison was to an expensive suit, it doesn’t matter) as something nice to have, and the death of a child as a tragedy. These two values are not being equated. Singer is drawing attention to the causal connection between spending your money on the first and not spending it on the second. This makes buying the shoes a very bad thing to do: its value is that of (a nice thing) - (a really good thing); saving the child has the value (a really good thing) - (a nice thing).
The only symmetry here is that of “equal and opposite”.
Did anyone actually need that spelled out?
These verbal contortions do not look convincing.
The claimed moral equivalence is between buying shoes and killing—not saving—a child. It’s also claimed equivalence between actions, not between values.
A lot of people around here see little difference between actively murdering someone and standing by while someone is killed while we could easily save them. This runs contrary to the general societal views that say it’s much worse to kill someone by your own hand than to let them die without interfering. Or even if you interfere, but your interference is sufficiently removed from the actual death.
For instance, what do you think George Bush Sr’s worst action was? A war? No; he enacted an embargo against Iraq that extended over a decade and restricted basic medical supplies from going into the country. The infant moratily rate jumped up to 25% during that period, and other people didn’t fare much better. And yet few people would think an embargo makes Bush more evil than the killers at Columbine.
This is utterly bizarre on many levels, but I’m grateful too—I can avoid thinking of myself as a bad person for not donating any appreciable amount of money to charity, when I could easily pay to cure a thousand people of malaria per year.
When you ask how bad an action is, you can mean (at least) two different things.
How much harm does it do?
How strongly does it indicate that the person who did it is likely to do other bad things in future?
Killing someone in person is psychologically harder for normal decent people than letting them die, especially if the victim is a stranger far away, and even more so if there isn’t some specific person who’s dying. So actually killing someone is “worse”, if by that you mean that it gives a stronger indication of being callous or malicious or something, even if there’s no difference in harm done.
In some contexts this sort of character evaluation really is what you care about. If you want to know whether someone’s going to be safe and enjoyable company if you have a drink with them, you probably do prefer someone who’d put in place an embargo that kills millions rather than someone who would shoot dozens of schoolchildren.
That’s perfectly consistent with (1) saying that in terms of actual harm done spending money on yourself rather than giving it to effective charities is as bad as killing people, and (2) attempting to choose one’s own actions on the basis of harm done rather than evidence of character.
But this recurses until all the leaf nodes are “how much harm does it do?” so it’s exactly equivalent to how much harm we expect this person to inflict over the course of their lives.
By the same token, it’s easier to kill people far away and indirectly than up close and personal, so someone using indirect means and killing lots of people will continue to have an easy time killing more people indirectly. So this doesn’t change the analysis that the embargo was ten thousand times worse than the school shooting.
For an idealized consequentialist, yes. However, most of us find that our moral intuitions are not those of an idealized consequentialist. (They might be some sort of evolution-computed approximation to something slightly resembling idealized consequentialism.)
That depends on the opportunities the person in question has to engage in similar indirectly harmful behaviour. GHWB is no longer in a position to cause millions of deaths by putting embargoes in place, after all.
For the avoidance of doubt, I’m not saying any of this in order to deny (1) that the embargo was a more harmful action than the Columbine massacre, or (2) that the sort of consequentialism frequently advocated (or assumed) on LW leads to the conclusion that the embargo was a more harmful action than the Columbine massacre. (It isn’t perfectly clear to me whether you think 1, or think 2-but-not-1 and are using this partly as an argument against full-on consequentialism.)
But if the question is who is more evil*, GHWB or the Columbine killers?”, the answer depends on what you mean by “evil” and most people most of the time don’t mean “causing harm”; they mean something they probably couldn’t express in words but that probably ends up being close to “having personality traits that in our environment of evolutionary adaptedness correlate with being dangerous to be closely involved with”—which would include, e.g., a tendency to respond to (real or imagined) slights with extreme violence, but probably wouldn’t include a tendency to callousness when dealing with the lives of strangers thousands of miles away.
Reminds me of the time the Texas state legislature forgot that ‘similar to’ and ‘identical to’ are reflexive.
I’m somewhat persuaded by arguments that choices not made, which have consequences, like X preventably dying, can have moral costs.
Not INFINITELY EXPLODING costs, which is what you need in order to experience the full brunt of responsibility of “We are the last two people alive, and you’re dying right in front of me, and I could help you, but I’m not going to.” when deciding to buy shoes or not, when there are 7 billion of us, and you’re actually dying over there, and someone closer to you is not helping you.
In case anyone else was curious about this, here’s a quote:
Oops.
Under utilitarianism, every instance buying an expensive pair shoes is the same as killing a child, but not every case of killing a child is equivalent to buying an expensive pair of shoes.
Are some cases of killing a child equivalent to buying expensive shoes?
Those in which the way you kill the child is by spending money on luxuries rather than saving the child’s life with it.
Do elaborate. How exactly does that work?
For example, I have some photographic equipment. When I bought, say, a camera, did I personally kill a child by doing this?
(I have the impression that you’re pretending not to understand, because you find that a rhetorically more effective way of indicating your contempt for the idea we’re discussing. But I’m going to take what you say at face value anyway.)
The context here is the idea (stated forcefully by Peter Singer, but he’s by no means the first) that you are responsible for the consequences of choosing not to do things as well as for those of choosing to do things, and that spending money on luxuries is ipso facto choosing not to give it to effective charities.
In which case: if you spent, say, $2000 on a camera (some cameras are much cheaper, some much more expensive) then that’s comparable to the estimated cost of saving one life in Africa by donating to one of the most effective charities. In which case, by choosing to buy the camera rather than make a donation to AMF or some such charity, you have chosen to let (on average) one more person in Africa die prematurely than otherwise would have died.
(Not necessarily specifically a child. It may be more expensive to save children’s lives, in which case it would need to be a more expensive camera.)
Of course there isn’t a specific child you have killed all by yourself personally, but no one suggested there is.
So, that was the original claim that Richard Kennaway described. Your objection to this wasn’t to argue with the moral principles involved but to suggest that there’s a symmetry problem: that “killing a child is morally equivalent to buying an expensive luxury” is less plausible than “buying an expensive luxury is morally equivalent to killing a child”.
Well, of course there is a genuine asymmetry there, because there are some quantifiers lurking behind those sentences. (Singer’s claim is something like “for all expensive luxury purchases, there exists a morally equivalent case of killing a child”; your proposed reversal is something like “for all cases of killing a child, there exists a morally equivalent case of buying an expensive luxury”.) Hence pianoforte611′s response.
You seemed happy to accept an amendment that attempts to fix up the asymmetry. And (I assumed) you were still assuming for the sake of argument the Singer-ish position that buying luxury goods is like killing children, and aiming to show that there’s an internal inconsistency in the thinking of those who espouse it because they won’t accept its reversal.
But I think there isn’t any such inconsistency, because to accept the Singer-ish position is to see spending money on luxuries as killing people because the money could instead have been used to save them, which means that there are cases in which one kills a child by spending money on luxuries.
Your argument against the reversed Singerian principle seems to me to depend on assuming that the original principle is wrong. Which would be fair enough if you weren’t saying that what’s wrong with the original principle is that its reversal is no good.
Nope. I express my rhetorical contempt in, um, more obvious ways. It’s not exactly that I don’t understand, it’s rather that I see multiple ways of proceeding and I don’t know which one do you have in mind (you, of course, do).
By they way, as a preface I should point out that we are not discussing “right” and “wrong” which, I feel, are anti-useful terms in this discussion. Morals are value systems and they are not coherent in humans. We’re talking mostly about implications of certain moral positions and how they might or might not conflict with other values.
Yes, I accept that.
Not quite. I don’t think you can make a causal chain there. You can make a probabilistic chain of expectations with a lot of uncertainty in it. Averages are not equal to specific actions—for a hypothetical example, choosing a lifestyle which involves enough driving so that in 10 years you drive the average amount of miles per traffic fatality does not mean you kill someone every 10 years.
However in this thread I didn’t focus on that issue—for the purposes of this argument I accepted the thesis and looked into its implications.
Correct.
It’s not an issue of plausibility. It’s an issue of bringing to the forefront the connotations and value conflicts.
Singer goes for shock value by putting an equals sign between what is commonly considered heinous and what’s commonly considered normal. He does this to make the normal look (more) heinous, but you can reduce the gap from both directions—making the heinous more normal works just as well.
I am not exactly proposing it, I am pointing out that the weaker form of this reversal (for some cases) logically follows from the Singer’s proposition and if you don’t think it does, I would like to know why it doesn’t.
Well, to accept the Singer position means that you kill a child every time you spend the appropriate amount of money (and I don’t see what “luxuries” have to do with it—you kill children by failing to max out your credit cards as well).
In common language, however, “killing a child” does not mean “fail to do something which could, we think, on the average, avoid one death somewhere in Africa”. “Killing a child” means doing something which directly and causally leads to a child’s death.
No. I think the original principle is wrong, but that’s irrelevant here—in this context I accept the Singerian principle in order to more explicitly show the problems inherent in it.
Taking that position conveniently gets one out of having to see buying a TV as equivalent to letting a child die—but I don’t see how it’s a coherent one. (Especially if, as seems to be the case, you agree with the Singerian position that you’re as responsible for the consequences of your inactions as of your actions.)
Suppose you have a choice between two actions. One will definitely result in the death of 10 children. The other will kill each of 100 children with probability 1⁄5, so that on average 20 children die but no particular child will definitely die. (Perhaps what it does is to increase their chances of dying in some fashion, so that even the ones that do die can’t be known to be the rest of your action.) Which do you prefer?
I say the first is clearly better, even though it might be more unpleasant to contemplate. On average, and the large majority of the time, it results in fewer deaths.
In which case, taking an action (or inaction) that results in the second is surely no improvement on taking an action (or inaction) that results in the first.
Incidentally, I’m happy to bite the bullet on the driving example. Every mile I drive incurs some small but non-zero risk of killing someone, and what I am doing is trading off the danger to them (and to me) against the convenience of driving. As it happens, the risk is fairly small, and behind a Rawlsian veil of ignorance I’m content to choose a world in which people drive as much as I do rather than one in which there’s much less driving, much more inconvenience, and fewer deaths on the road. (I’ll add that I don’t drive very much, and drive quite carefully.)
I think that when you come at it from that direction, what you’re doing is making explicit how little most people care in practice about the suffering and death of strangers far away. Which is fair enough, but my impression is that most thoughtful people who encounter the Singerian argument have (precisely by being confronted with it) already seen that.
I agree: it does. The equivalence seems obvious enough to me that I’m not sure why it’s supposed to change anyone’s mind about anything, though :-).
Only the fact that trading luxuries against other people’s lives seems like a worse problem than trading “necessities” against other people’s lives.
Sure. Which is why the claim people actually make (at least when they’re being careful about their words) is not “buying a $2000 camera is killing a child” but “buying a $2000 camera is morally equivalent to killing a child”.
I said upfront that human morality is not coherent.
However I think that the root issue here is whether you can do morality math.
You’re saying you can—take the suffering of one person, multiply it by a thousand and you have a moral force that’s a thousand times greater! And we can conveniently think of it as a number, abstracting away the details.
I’m saying morality math doesn’t work, at least it doesn’t work by normal math rules. “A single death is a tragedy; a million deaths is a statistic”—you may not like the sentiment, but it is a correct description of human morality. Let me illustrate.
First, a simple example of values/preferences math not working (note: it’s not a seed of a new morality math theory, it’s just an example). Imagine yourself as an interior decorator and me as a client.
You: Welcome to Optimal Interior Decorating! How can I help you?
I: I would like to redecorate my flat and would like some help in picking a colour scheme.
You: Very well. What is your name?
I: Lumifer!
You: What is your quest?
I: To find out if strange women lyin’ in ponds distributin’ swords are a proper basis for a system of government!
You: What is your favourite colour?
I: Purple!
You: Excellent. We will paint everything in your flat purple.
I: Errr...
You: Please show me your preferred shade of purple so that we can paint everything in this particular colour and thus maximize your happiness.
And now back to the serious matters of death and dismemberment. You offered me a hypothetical; let me also suggest one for you.
You’re in a boat, somewhere offshore. Another boat comes by and it’s skippered by Joker, relaxing from his tussles with Batman. He notices you and cries: “Hey! I’ve got an offer for you!” Joker’s offer is as follows. Some time ago he put a bomb with a timer under a children’s orphanage. He can switch off the bomb with a radio signal, but if he doesn’t, the bomb will go off (say, in a couple of hours) and many dozens of children will be killed and maimed. Joker has also kidnapped a five-year-old girl who, at the moment, is alive and unharmed in the cabin.
Joker says that if you go down into the cabin and personally kill the five-year-old girl with your bare hands—you can strangle her or beat her to death or something else, your choice—he, Joker, will press the button and deactivate the bomb. It will not go off and you will save many, many children.
Now, in this example the morality math is very clear. You need to go down into the cabin and kill that little girl. Shut up, multiply, and kill.
And yet I have doubts about your ability to do that. I consider that (expected) lack of ability to be a very good thing.
Consider a concept such as decency. It’s a silly thing; there is no place for it in the morality math. You’ve got to maximize utility, right? And yet...
I suspect there were people who didn’t like the smell of burning flesh and were hesitant to tie women to stakes on top of firewood. But then they shut up and multiplied by the years of everlasting torment the witch’s soul would suffer, and picked up their torches and pitchforks.
I suspect there were people who didn’t particularly enjoy dragging others to the guillotine or helping arrange an artificial famine to kill off the enemies of the state. But then they shut up and multiplied by the number of poor and downtrodden people in the country, and picked up their knives and guns.
In a contemporary example, I suspect there are people who don’t think it’s a neighbourly thing to scream at pregnant women walking to a Planned Parenthood clinic and shove highly realistic bloody fetuses into their faces. But then they shut up and multiplied by the number of unborn children killed each day, and they picked up their placards and megaphones.
So, no, I don’t think “shut up and multiply” is always good advice. Sometimes it’s appropriate, but at other times it’s a really bad idea and has bloody terrible failure modes. Often enough those other times are when people believe that morality math trumps all other considerations. So they shut up, multiply, and kill.
Accounting for possible failure modes and the potential effects of those failure modes is a crucial part of any correctly done “morality math”.
Granted, people can’t really be relied upon to actually do it right, and it may not be a good idea to “shut up and multiply” if you can expect to get it wrong… but then failing to shut up and multiply can also have significant consequences. The worst thing you can do with morality math is to only use it when it seems convenient to you, and ignore it otherwise.
However, none of this talk of failure modes represents a solid counterargument to Singer’s main point. I agree with you that there is no strict moral equivalence to killing a child, but I don’t think it matters. The point still holds that by buying luxury goods you bear moral responsibility for failing to save children who you could (and should) have saved.
Now that the funding gap of the AMF has closed, I’m not sure this is still the case.
Yeah, I wondered about adding a note to that effect. But it seems unlikely to me that the AMF is that much more effective than everything else out there. Maybe it’s $4000 now. Maybe it always was $4000. Or $1000. I don’t think the exact numbers are very critical.
Then tell me where I can most cheaply save a life.
I don’t know, and I wouldn’t be surprised if there’s no way to reliably do it with less than $5000.
Presumably if you stole a child’s lunch money and bought a pair of shoes with it
Richard’s question is a good one, but even if there’s no good answer, it’s a psychological fact that people can get convinced that they should redirect their existing donations to cost-effective charities but not that charity should crowd out other spending—and that this is an easier sell. So the framing of EA that Nancy describes has practical value.
The biggest problem I have with ‘dead baby’ arguments is that I value babies significantly below a high-functioning adult. Given the opportunity to save one or the other, I would pick the adult, and I don’t find that babies have a whole lot of intrinsic value until they’re properly programmed.
If you don’t take care of babies, you’ll eventually run out of adults. If you don’t have adults, the babies won’t be taken care of.
I don’t know what a balanced approach to the problem would look like.
I’m not sure why you would optimize your charitable donations for QALYs/utilons if your goal wasn’t improving the world. If you care about acquiring warm fuzzies, and donating to marginally improve the world is a means toward that end, then EA doesn’t seem to affect you much, except by potentially guilting you into no longer considering lesser causes virtuous in the sense that creates warm fuzzies for you.
For me the idea of EA just made those lesser causes not generate fuzzies anymore, no guilt involved. It’s difficult to enjoy a delusion you’re conscious of.
Understanding the emotional pain of others, on a non-verbal level, can lead in at least two directions, which I’ve usually seen called “sympathy” and “personal distress” in the psych literature. Personal distress involves seeing the problem as (primarily, or at least importantly) one’s own. Sympathy involves seeing it as that person’s. Some people, including Albert Schweitzer, claim(ed) to be able to feel sympathy without significant personal distress, and as far as I can see that seems to be true. Being more like them strikes me as a worthwhile (sub)goal. (Until I get there, if ever—I feel your pain. Sorry, couldn’t resist.)
Hey I just realized—if you can master that, and then apply the sympathy-without-personal-distress trick to yourself as well, that looks like it would achieve one of the aims of Buddhism.
If you do this, would not the result be that you do not feel distress from your own misfortunes? And if you don’t feel distress, what, exactly, is there to sympathize with?
Wouldn’t you just shrug and dismiss the misfortune as irrelevant?
If you could switch off pain at will would you consider the tissue damage caused by burning yourself irrelevant?
I would not. This is a fair point.
Follow-up question: are all things that we consider misfortunes similar to the “burn yourself” situation, in that there is some sort of “damage” that is part of what makes the misfortune bad, separately from and in addition to the distress/discomfort/pain involved?
Consider a possible invention called a neuronic whip (taken from Asimov’s Foundation series). The neuronic whip, when fired at someone, does no direct damage but triggers all of the “pain” nerves at a given intensity.
Assume that Jim is hit by a neuronic whip, briefly and at low intensity. There is no damage, but there is pain. Because there is pain, Jim would almost certainly consider this a misfortune, and would prefer that it had not happened; yet there is no damage.
So, considering this counterexample, I’d say that no, not every possible misfortune includes damage. Though I imagine that most do.
No need for sci-fi.
Much of what could be called damage in this context wouldn’t necessarily happen within your body; you can take damage to your reputation, for example.
You can certainly be deluded about receiving damage, especially in the social game.
That is true; but a single counterexample is enough, so I can simply specify that the neuronic whip is used under circumstances where there is no social damage (e.g. it was discharged accidentally and no-one knew Jim was there to be hit by it).
Yes. I didn’t mean to refute your idea in any way and quite liked it. Forgot to upvote it though. I merely wanted to add a real world example.
Let’s say you cut your finger while chopping vegetables. If you don’t feel distress, you still feel the pain. But probably less pain: the CNS contains a lot of feedback loops affecting how pain is felt. For example, see this story from Scientific American. So sympathize with whatever relatively-attitude-independent problem remains, and act upon that. Even if there would be no pain and just tissue damage, as hyporational suggests, that could be sufficient for action.
Huh, that sounds like the sympathy/empathy split, except I think reversed; empathy is feeling pain from others’ distress, whereas sympathy is understanding others’ pain as it reflects your own distress. Specifically mitigating ‘feeling pain from others’ distress’ as applied to a broad sphere of ‘others’ has been a significant part of my turn away from an altruistic outlook; this wasn’t hard, since human brains naturally discount distant people and I already preferred getting news through text, which keeps distant people’s distress viscerally distant.
But you don’t have to bear it alone. It’s not as if one person has to care about everything (nor does each single person have to care for everyone).
Maybe the multiplication (in the example, the care for a single bird multiplied by the number of birds) should be followed by a division by the number of persons available to do the caring (possibly adjusted by the expected amount of individual caring).
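As a rough sketch of that adjustment, with symbols that are mine rather than anything from the comment above: if $c$ is the care owed to a single bird, $N$ the number of birds, and $w_j$ the expected amount of caring person $j$ actually contributes, then person $i$’s share might look like

$$\text{share}_i \approx c \cdot N \cdot \frac{w_i}{\sum_j w_j},$$

which reduces to an equal split of $c \cdot N$ over the number of carers when all the $w_j$ are equal.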
Intellectually, I know that you are right; I can take on some of the weight while sharing it. Intuitively, though, I have impossibly high standards, for myself and for everything else. For anyone I take responsibility for caring for, I have the strong intuition that if I was really trying, all their problems would be fixed, and that they have persisting problems means that I am inherently inadequate. This is false. I know it is false. Nonetheless, even at the mild scales I do permit myself to care about, it causes me significant emotional distress, and for the sake of my sanity I can’t let it expand to a wider sphere, at least not until I am a) more emotionally durable and b) more demonstrably competent.
Or in short, blur out the details and this is me:
Also, I forget which post (or maybe HPMOR chapter) I got this from, but… it is not useful to assign fault to a part of the system you cannot change, and dividing by the size of the pre-existing altruist (let alone EA) community still leaves things feeling pretty huge.
Having a keen sense for problems that exist, and wanting to demolish them and fix the place from which they spring, is not an instinct to quash.
That it causes you emotional distress IS a problem, insofar as you have the ability to perceive and want to fix the problems in the absence of the distress. You can test that by finding something you viscerally do not care for and seeing how well your problem-finder works on it; if it’s working fine, the emotional reaction is not helpful, and fixing it will make you feel better without coming at the cost of smashing your instincts to fix the world.
It’s Harry talking about Blame, chapter 90. (It’s not very spoilery, but I don’t know how the spoiler syntax works and failed after trying for a few minutes.)
I don’t think I understand what you wrote there, AnthonyC; world-scale problems are hard, not immutable.
“A part of the system that you cannot change” is a vague term (and it’s a vague term in the HPMOR quote as well). We think we know what it means, but then you can ask questions like “if there are ten things wrong with the system and you can change only one, but you get to pick which one, which ones count as a part of the system that you can’t change?”
Besides, I would say that the idea is just wrong. It is useful to assign fault to a part of the system that you cannot change, because you need to assign the proper amount of fault, not merely assign fault somewhere, and assigning fault to the part that you can’t change affects the amounts that you assign to the parts that you can change.
That’s one way for people to become religious.
I’m not sure what point is being made here. Distributing burdens is a part of any group; why is religion exceptional here?
Theory of mind, heh… :-)
The point is that if you actually believe in, say, Christianity (that is, you truly internally believe and not just go to church on Sundays so that neighbors don’t look at you strangely), it’s not your church community which shares your burden. It’s Jesus who lifts this burden off your shoulders.
Ah, that’s probably not what the parent meant then. What he was referring to was analogous to sharing your burden with the church community (or, in context, the effective altruism community).
Yes, of course. I pointed out another way through which you don’t have to bear it alone.
Ah, I understand. Thanks for clearing up my confusion.
Here’s a weird reframing. Think of it like playing a game like Tetris or Centipede. Yep, you are going to lose in the end, but that’s not an issue. The idea is to score as many points as possible before that happens.
If you save someone’s life on expectation, you save someone’s life on expectation. This is valuable even if there are lots more people whose lives you could hypothetically save.
Ditto, though I diverged differently. I said, “Ok, so the problems are greater than available resources, and in particular greater than resources I am ever likely to be able to access. So how can I leverage resources beyond my own?”
I ended up getting an engineering degree and working for a consulting firm advising big companies on what emerging technologies to use/develop/invest in. Ideal? Not even close. But it helps direct resources in the direction of efficiency and prosperity, in some small way. I have to shut down the part of my brain that tries to take on the weight of the world, or my broken internal care-o-meter gets stuck at “zero, despair, crying at every news story.” But I also know that little by little, one by one, painfully slowly, the problems will get solved as long as we move in the right direction, and we can then direct the caring that we do have in a bit more concentrated way afterwards. And as much as it scares me to write this, in the far future, when there may be quadrillions of people? A few more years of suffering by a few billion people here and now won’t add or subtract much from the total utility of human civilization.
Super relevant slatestarcodex post: Nobody Is Perfect, Everything is Commensurable.
Read that at the time and again now. Doesn’t help. Setting the threshold at anything less than perfect is still not possible; perfection itself would be insufficient. I recognize that this is a problem, but it is an intractable one and looks to remain so for the foreseeable future.
But what about the quantitative way? :(
Edit: Forget that… I finally get it. Like, really get it. You said:
Oh, my gosh… I think that’s why I gave up Christianity. I wish I could say I gave it up because I wanted to believe what’s true, but that’s probably not true. Honestly, I probably gave it up because having the power to impact someone else’s eternity through outreach or prayer, and sometimes not using that power, was literally unbearable for me. I considered it selfish to do anything that promoted mere earthly happiness when the Bible implied that outreach and prayer might impact someone’s eternal soul.
And now I think that, personally, being raised Christian might have been an incredible blessing. Otherwise, I might have shared your outlook. But after 22 years of believing in eternal souls, actions with finite effects don’t seem nearly as important as they probably would had I not come from the perspective that people’s lives on earth are just specks, just one infinitieth of their total existence.