One Life Against the World
“Whoever saves a single life, it is as if he had saved the whole world.”
– The Talmud, Sanhedrin 4:5
It’s a beautiful thought, isn’t it? Feel that warm glow.
I can testify that helping one person feels just as good as helping the whole world. Once upon a time, when I was burned out for the day and wasting time on the Internet—it’s a bit complicated, but essentially, I managed to turn someone’s whole life around by leaving an anonymous blog comment. I wasn’t expecting it to have an effect that large, but it did. When I discovered what I had accomplished, it gave me a tremendous high. The euphoria lasted through that day and into the night, only wearing off somewhat the next morning. It felt just as good (this is the scary part) as the euphoria of a major scientific insight, which had previously been my best referent for what it might feel like to do drugs.
Saving one life probably does feel just as good as being the first person to realize what makes the stars shine. It probably does feel just as good as saving the entire world.
But if you ever have a choice, dear reader, between saving a single life and saving the whole world—then save the world. Please. Because beyond that warm glow is one heck of a gigantic difference.
For some people, the notion that saving the world is significantly better than saving one human life will be obvious, like saying that six billion dollars is worth more than one dollar, or that six cubic kilometers of gold weighs more than one cubic meter of gold. (And never mind the expected value of posterity.) Why might it not be obvious? Well, suppose there’s a qualitative duty to save what lives you can—then someone who saves the world, and someone who saves one human life, are just fulfilling the same duty. Or suppose that we follow the Greek conception of personal virtue, rather than consequentialism; someone who saves the world is virtuous, but not six billion times as virtuous as someone who saves one human life. Or perhaps the value of one human life is already too great to comprehend—so that the passing grief we experience at funerals is an infinitesimal underestimate of what is lost—and thus passing to the entire world changes little.
I agree that one human life is of unimaginably high value. I also hold that two human lives are twice as unimaginably valuable. Or to put it another way: Whoever saves one life, it is as if they had saved the whole world; whoever saves ten lives, it is as if they had saved ten worlds. Whoever actually saves the whole world—not to be confused with pretend rhetorical saving the world—it is as if they had saved an intergalactic civilization.
Two deaf children are sleeping on the railroad tracks, the train speeding down; you see this, but you are too far away to save them. I’m nearby, within reach, so I leap forward and drag one child off the railroad tracks—and then stop, calmly sipping a Diet Pepsi as the train bears down on the second child. “Quick!” you scream to me. “Do something!” But (I call back) I already saved one child from the train tracks, and thus I am “unimaginably” far ahead on points. Whether I save the second child, or not, I will still be credited with an “unimaginably” good deed. Thus, I have no further motive to act. Doesn’t sound right, does it?
Why should it be any different if a philanthropist spends $10 million on curing a rare but spectacularly fatal disease which afflicts only a hundred people planetwide, when the same money has an equal probability of producing a cure for a less spectacular disease that kills 10% of 100,000 people? I don’t think it is different. When human lives are at stake, we have a duty to maximize, not satisfice; and this duty has the same strength as the original duty to save lives. Whoever knowingly chooses to save one life, when they could have saved two—to say nothing of a thousand lives, or a world—they have damned themselves as thoroughly as any murderer.
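To spell out the arithmetic (assuming, as above, that the two grants are equally likely to produce a cure):

$$100 \text{ lives at stake (rare disease)} \quad \text{vs.} \quad 0.10 \times 100{,}000 = 10{,}000 \text{ lives at stake (common disease)},$$

so for the same $10 million, the second grant addresses roughly a hundred times as many lives in expectation.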
Addendum: It’s not cognitively easy to spend money to save lives, since cliche methods that instantly leap to mind don’t work or are counterproductive. (I will post later on why this tends to be so.) Stuart Armstrong also points out that if we are to disdain the philanthropist who spends life-saving money inefficiently, we should be consistent and disdain more those who could spend money to save lives but don’t.
Also, whoever saves a person to live another fifty years, it is as if they had saved fifty people to live one more year. Whoever saves someone who very much enjoys life, it is as if they saved many people who are not sure they really want to live. And whoever creates a life that would not have otherwise existed, it is as if they saved someone who had an entire lifetime yet to live.
Which is why I’m still puzzled by a simplistic moral dilemma that just won’t go away for me: are we morally obligated to have children, and as many as we can? Sans using that energy or money to more efficiently “save” lives, of course. It seems to me we should encourage people to have children—a common thing that many more people will actually do than donate philanthropically—in addition to other philanthropy encouragements.
Cost of a first-world child is… (checks random Google result) $180,000 to get them to age 18. Cost of saving a kid in Africa from dying of malaria is ~$1,000.
Right now having children is massively selfish, because there are options that are more than TWO orders of magnitude more effective. It’d be like blowing up the train in order to save the deaf kids from the original post :)
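Taking those two figures at face value (a rough sketch, not a careful cost-effectiveness estimate):

$$\frac{\$180{,}000}{\$1{,}000} = 180 \approx 10^{2.3},$$

i.e. a bit more than two orders of magnitude.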
Not necessarily. A full argument would consider the opportunities available to a child you raise—it’s perfectly possible for a single first-world child to be more productive than 180 kids in Africa.
There’s also the counter-point (to my previous point) that having children discourages other people from having children, due to the forces of the market (greater demand for stuff available to children ⇒ greater costs of stuff available to children). Of course, the effect on demand is spread out to stuff other than just stuff available to children, so overall this does not cause an equal and opposite reaction.
If you successfully teach your child to be utilitarian, effective altruist, etc., though, the utility of both previous points is dwarfed by this (the second point is dwarfed because the average first-world child probably wouldn’t pick up utilitarianism or EA). I’m not sure what the probability of a child picking up stuff like that is (and it would make one heck of a difficult experiment), but my guess is that if taught properly it would be likely enough to dwarf the utility of the first two points.
Well, yes, but my point is that this is a rather unreasonable clause, since if we actually pay attention to what we can efficiently do, “have children” doesn’t even make the Top 100. So why would you possibly focus on “have children”, and treat it as a dilemma?
I interpreted that line as a cached thought / brush-off, not “of course I’ve done the math, and there’s a thousand more effective things, but I still find it odd that having children can EVER be a positive act. I mean, ew, babies! Those can’t be good for the world o.o”
I suppose it’s the difference between asking “is it better to blow up the train” and asking “can it be better to blow up the train?” It’s worth noting that even if we have an obligation to create lives, our obligation to save them is easier to fulfill; but it’s still worth knowing if the two are actually equivalent.
Reading the original comment, adam does, in fact, seem to have assumed that having children would be the right choice if it mattered, so … point, I guess.
Oooh, I like that distinction, and will try to remember it in the future :)
A lot of people don’t consider failure to exist the same as dying. Of course, we need some level of procreation as long as there is death, and humanity would probably continue to expand even then.
Why? Because dying is painful? Beyond that, I see them as equivalent.
Non-existing is not the same thing as ceasing to exist.
Among other reasons, if you die there will be people mourning you, whereas if you had never existed in the first place there won’t.
But the whole point of the post above is that our personal feelings are negligible next to the enormity of the utilitarian consequences behind our feelings.
Caring about the world isn’t about having a gut feeling that corresponds to the amount of suffering in the world, it’s about doing the right thing anyway. Even without the feeling.
The fact that no one knows the unborn person yet doesn’t mean that she doesn’t matter.
I was using mourning synecdochically to refer to all the externalities your death would have on other people, not just their feelings.
The goal behind altruism is to improve the quality of life for the human race. The motivation for altruism may be due to evolutionary reasons, such as propagation of the species, but it is not the same as altruism itself.
This post, however, is about the latter, as you have rightly pointed out. Nevertheless, the way to go about maximizing is to first ensure that all people currently alive remain alive and well taken care of. After that there’s plenty of time to go about having more babies :)
I will note that this is one of the fundamental failings of utilitarianism, the “mere addition” paradox. Basically, take a billion people who are miserable, and one million people who are very happy. If you “add up” the happiness of the billion people, they are “happier” on the whole than the million people; therefore, the billion are a better use of natural resources.
The problem is that it always assumes some incorrect things:
1) It assumes all people are equal.
2) It assumes that happiness is transitive.
3) It assumes that you can actually quantify happiness in a meaningful way in this manner.
4) It assumes the additive property for happiness—that you can add up some number of miserable people to get one happy person.
None of these assumptions are necessarily true.
Of course, all moral philosophies are going to fail at some level.
Note that, for instance, in this case there is an obvious difference: adding 50 years to one life is actually significantly better than extending 50 lives by 1 year each, as the investment to improve one person for 50 years is considerably less, and one person with 50 years can do considerably larger, longer, and grander projects.
That last one sounds like we should try to make a simple, self-improving GAI whose goal it is to tile the universe with smiley faces.
Source?
Robin Hanson. He was speaking in quotable prose but expressing his own opinion.
Interesting to choose a rich philanthropist for your analogy. Very wealthy people give less, as a percentage of their wealth and income, than people of much more limited means. For wealthy philanthropists, the value from giving may be in status from the publicity of large gifts.
Robin’s comment raises the interesting question of whether creating a new life is as good as saving one. It definitely seems to be easier to create a new one, at least at first (the long term effort is probably greater). Most people manage to create a new life or two, but probably never save any. We don’t tend to celebrate new-life creators as much as we do life-savers, perhaps because it is seen as too easy.
No. It’s way, way easier to save one. According to the Disease Control Priorities Project (http://tinyurl.com/y9wpk5e) you can save lives for about $3 per year. That’s, what, $225 for a whole life? Creating a life requires nine months of pregnancy, during which you can’t work as well, and you have to pay for food while you’re eating for two, and that’s just assuming you give the child up for adoption. You also can only do it once every nine months, and you have to be a girl, whereas you can save a life every time you earn $225.
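The arithmetic behind that figure, assuming a lifespan of roughly 75 years (my reading of the comment, not a number taken from the DCPP report):

$$\$3/\text{year} \times 75 \text{ years} \approx \$225 \text{ per life saved}.$$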
That means it’s cheaper and possible to do in greater volume—not easier. It’s probably uncommon indeed to save lives by accident, let alone while actively trying not to, which happens in the creation department all the time. Easier certainly doesn’t mean cheaper, or people would behave differently with credit cards.
Having established that, would you say that a pregnancy (or several, since the average pregnancy produces less than one child) is easier or harder than mailing a check?
I’d distinguish here between “difficult” as in requiring discomfort and “difficult” as in requiring optional effort. By optional effort, I mean effort that one could feasibly take the null action rather than exert. None of the effort expended in carrying a baby to term is really optional at the time. If I were to get pregnant, I could at no time say to myself, “Well, I’d really rather not vomit right now, so I’ll take the null action.” Even if there were something I could have done earlier to enable the null action at that time, once it gets to that point, it’s happening whether I like it or not. Similar with labor. I don’t think anyone will perform an abortion when one is literally about to extrude an infant, practically speaking, so although labor is an immense effort, it is not an optional effort once it’s gotten to that point. Taking Plan B, going through with an abortion, and yes—mailing a check, are all optional effort.
I think Alicorn’s point was that being pregnant might be more unpleasant/expensive/”difficult” than mailing a check, but getting pregnant is much, much easier. So easy, in fact, one can do it accidentally.
I think the crucial point here is the disparity between sexes. The amount of effort required for a man to induce a pregnancy, the cost of the dating&mating game, is certainly not “easy”. I expect this is also the case for some (least-generally-attractive) women.
But getting pregnant is not enough to make a child.
Barring spontaneous miscarriage or starvation, it is the default. A woman has to refrain from taking certain actions, but, once she’s pregnant, she doesn’t actively have to do much but not starve.
Let me talk to some of my female friends and get back to you. I’ll see how many comments along the lines of “labor is easy” and “Vomiting off and on for months on end? Yeah, but it’s just a default.” it takes until I get slapped. I argue that default or not it is easier to abort a child than to carry it to term and give birth. That human instincts cry out to us to produce offspring at huge cost doesn’t mean it is easy.
I’m 99% sure you’re missing the point.
Falling down a hill is quite painful. However, once you start falling down a hill, you’re going to keep falling until you reach the bottom, unless you make a conscious and coordinated decision to stop yourself, if you are even able to. In that sense, once you start falling down a hill, it is very easy to keep falling, because it’s what happens if you don’t try to change things.
In the same sense (and I apologize for the unpleasant metaphor) pregnancy is easy; once a woman is pregnant, barring miscarriage, she’s gonna have a kid. It’s going to be painful and at times miserable, but it’s going to happen. I agree with you that there’s less disutility experienced if she has an abortion, but she has to make a conscious choice to do that, go to a clinic, and pay a bill. It may be the more pleasant way out, but, in this context, it’s not easier; it doesn’t really happen by accident (excepting spontaneous miscarriage, which is admittedly fairly common, but besides the point).
Everything you describe is something she endures; there’s no willpower to it. This is in contrast with saving a life as discussed earlier, which requires a deliberate, conscious decision, negotiating some mild bureaucracy, and writing a check. If you get pregnant and do not actively choose to do something about it, you have a kid. If you get a checkbook and do not actively choose to write checks to charities, you don’t save any lives. That’s what easy means.
ETA: You also assume that abortion has no cost beyond its monetary cost. For women who believe it is wrong for whatever reason, it may be far, far more unpleasant than having the child.
What I am going to say may be extremely unpopular, but everything you have stated about carrying a child to term could well fall within the terminology used by Psychohistorian:
Even given all of the discomforts and difficulties you have mentioned, they are more about the mother than the child, and as long as the puking, cramping, cranky, hormonal mother does not starve, the child should be delivered.
I think that most of what you have mentioned are just difficulties with the attempt to make certain the mother does not starve.
But, then, at this point, it’s really all just semantics/terminology that we are talking about.
Good medicine does a bit more than that. If the only thing you do is “not starve”, the probability is somewhere between 1⁄3 and 2⁄3 that the child will die, quite possibly killing the mother as well.
ETA: Not doing things can also be hard, if the consequences are unpleasant enough.
It doesn’t particularly bother me but you are mistaken.
Yes. All of which combined is harder than saving a life at the current margin.
Labour doesn’t have much to do with not starving.
Taboo “difficulty.”
Creating a human life doesn’t require a lot of advance planning/willpower, while saving a life does require you to think about the problem in advance and decide on an inconvenient course of action when nobody’s forcing you to do so.
The costs of creating a human life in effort, financial expense, suffering, and willpower, considered as a whole, are greater than the costs of saving a life.
We do celebrate life creators quite a bit. But we celebrate their good fortune rather than their altruism, since the parents are among the people who benefit most from their parenthood.
In a Big World, which this one appears to be on at least three counts (spatially infinite open universe, inflationary scenario in Standard Model, and Everett branches), everyone who could exist already exists with probability 1. Thus, the issue is not so much creating new people, but ensuring that good things happen to people given that they exist. Creating a new person helps when you can provide them with good outcomes, because what you’re really doing is increasing the frequency of good outcomes from that starting point.
Or at least that’s one anthropic interpretation of ethics. But it is one reason why I don’t endorse running out and creating lots of people if that lowers the average standard of living. In a Big World, it’s the average standard of living that you care about.
Tell me if I’m wrong, but doesn’t a many worlds reality mean that all possible states of those people also occur with a probability of one? How can you possibly “[increase] the frequency of good outcomes”? All the outcomes occur in some world, irrespective of our actions.
Eliezer, I hope we can agree that your conclusion is intriguing, but far from clearly true. After all, if every possible person exists, then so does every possible history for every possible person. How then could you effect any relative frequencies?
I have a paper on this problem of infinities in ethics: http://www.nickbostrom.com/ethics/infinite.pdf
It is a difficult topic.
Where does this end? If a philanthropist saves one life instead of two, he is as damned as any murderer. Surely we in the more prosperous countries could easily save many lives by cutting back on luxuries, but we choose not to (this would no doubt apply to nearly everyone in these countries). Does that make us all murderers?
Yes. We just aren’t socially condemned for it.
Eliezer, whatever it is you were getting at in your comment, it was waaay over my head. When I searched on Wikipedia for Big World, I got an album by Joe Jackson. When I looked for Everett branches, I found an intriguing article about the Piscataquog River. Could you point me to some further reading? I hate to feel left out of the loop here.
Google “Tegmark big universe” (without the quotes), or read the Quantum Physics Sequence.
Robin: And whoever creates a life that would not have otherwise existed, it is as if they saved someone who had an entire lifetime yet to live.
I have to question that comparison. When you save a life that already exists, you are delivering them from a particular existential danger, even if not from the generic existential danger they face constantly by virtue of being alive. But when you create a life, you are delivering a new “hostage to fortune” and creating an existentially endangered being where none previously existed.
I think on re-reading this that Robin’s initial comment was meant to be ironic, or at least a provocative extension of Eliezer’s ideas.
As far as Eliezer’s point, I would imagine that rabbis and other moral philosophers would agree that saving two lives is better than saving one. Beyond that the calculus of human lives is a difficult problem. Many people would say we should not sacrifice one to save two. There is this distinction between active and passive actions, which are judged very differently. It’s all something of a mess.
Utilitarianism to the rescue, then.
Utilitarianism is unlikely to rescue anyone from the conundrum (unless it’s applied in the most mindless way—in which case, you might as well not think about it).
There’s an obvious social benefit to being secure against being randomly sacrificed for the benefit of others. You’re not going to be able to quantify the utility of providing everyone in society this benefit as a general social principle, and weigh the benefit of consistency on that point against the benefit of violating the principle in any given instance, any more easily than you could have decided the issue without any attempt at quantification.
Charles, you might want to read some of Peter Singer’s writings on this point.
Robin, it’s clear that relative frequencies exist and matter somehow, even though it might seem like they shouldn’t (e.g. because of the ordering problem described in Dr. Bostrom’s paper). We observe random events with nonuniform distributions to occur according to the distribution, as opposed to uniformly. We don’t live in an extremely bizarre, acausal world even though there are an infinite number throughout spacetime, because the laws of physics are such as to make bizarre worlds rarer than normal ones (even though there are many more possible bizarre worlds than normal ones). “Difficult topic” is probably an understatement.
Robin, we can definitely agree that my notion about relative conditional frequencies is not at all clearly true. This is one of those rare, rare issues that still confuses even me. As such—this is an important general principle, that I’d like to emphasize—when you try to model things that are deeply confusing and mysterious to you, you should not be very confident in your judgments about them.
If infinite people exist, how do our subjective probabilities come out right—why don’t we always see every possible die roll with probability 1⁄6, even when the dice are loaded? How is computation possible, when every if statement always branches both ways? I seriously don’t know. Maybe the numbers are finite but just very large. But, if for whatever reason it is possible to flip a biased coin and indeed see mostly heads, then we can try to shape the outcomes of people’s lives so that their futures are mostly happy. I don’t claim to be sure of this. It is just my attempt to make things add up to normality.
Jeremy, see Nick Bostrom’s paper.
and then stop, calmly sipping a Diet Pepsi as the train bears down on the second child.
But what if, while sipping this Diet Pepsi, you’re fiddling on your laptop computer to organise the shipment of HIV drugs to Africa? If, at the moment the second child’s skull gets crushed by the onrushing train, you manage to secure governmental support for a deal which will save thousands of people? If you wipe the splattered blood from your suit as you rush off to open a new orphanage in India?
Something still feels wrong. I think our intellectual urge to save lives is at war with our heuristics in the same direction. And those heuristics certainly predate statistics, and urge: save what you visibly can. The child in front of you is a sure bet—you can save him, you’re certain of that. More distant lives feel much more uncertain.
There’s also the issue of immediacy.
Your organisation of HIV drugs is likely something that could wait a minute, especially with an excuse as universally acceptable as saving a child from being run over by the train.
Thus, it is nigh certain that you could achieve both goals.
As for the philanthropist, I think the relevant heuristic is that we approve of anyone who saves lives, to socially reinforce the urge for others to do so. If our instincts developed in a tribal environment, then saving a life, or a small group of lives, is the best that anyone can realistically do, so we had no need to scale our admiration to a larger scale.
But if we are to become less biased, and disdain the philanthropist who spends his life-saving money inefficiently, we should be totally consistent about it, and disdain far, far more those who could spend money to save lives and don’t (unfortunately, that probably includes most of us).
In a Big World, which this one appears to be on at least three counts (spatially infinite open universe, inflationary scenario in Standard Model, and Everett branches), everyone who could exist already exists with probability 1.
Infinite reasoning has many issues. Since we can only have finite amounts of evidence, there will always be an “X” such that the probability of “the universe is of size X or bigger” becomes tiny—swamped by uncertainties in our reasoning, even the possible uncertainties in our logic (by the way, if this is wrong, and actual infinities can be demonstrated from finite evidence, please do let me know!).
It seems that the best is to simply take our decision based on the finite evidence that we can see.
See addendum.
I am rather surprised that no one is questioning the unspoken presupposition that all human lives are of equal value.
They certainly aren’t in my estimation.
Acksiom, yes, I find it strange as well. Certainly, people in our immediate community are more valuable than people we have never met in other continents. However, I don’t think “community” should include beyond those who we actually interact with. It shouldn’t include abstract groupings such as “state” or “nation”. Supporting your high-school sports team is fine.
I don’t see why they should be more valuable. From a selfish perspective, it might feel worse to lose someone you know, but from a charitable perspective, I don’t value someone merely because I am familiar with them.
You can be more certain about what actions are likely to save a life than about what actions are likely to save many lives.
What if, as we approach the Singularity, it is provably or near-provably necessary to do unethical things like killing a few people or letting them die to avoid the worst of Singularity outcomes?
(I am not referring here to whether we may create non-Friendly AGI. I am referring to scenarios even before the AGI “takes over.”)
Such scenarios seem not impossible, and they create ethical dilemmas along the lines of what Yudkowsky mentions here.
Certainly, people in our immediate community are more valuable than people we have never met in other continents.
On a personal level, of course. But morally and ethically, and especially if you are looking for universal ethical values, this is most definitely not the case.
I am rather surprised that no one is questioning the unspoken presupposition that all human lives are of equal value.
That presupposition is an unjustified bias, but, I feel, a practical one. We’ve seen in the past what happens when human lives were openly valued at different levels, and the consequences were unpleasant, to say the least. Also, seeing as trying to calculate “the value of a human life” is something stupendously open to bias, I feel that there are practical and anti-bias reasons to accept the null hypothesis: that all human lives are of equal value.
I’m not sure it makes for bad universal ethics. “Every man for himself, if he is able” is a very sturdy sentiment, not prone to abuse. Similarly, every individual, family, social circle, city, state, nation, is responsible first for their own well-being, and only then for that of others. Certainly, you could do more overall good by helping those whose needs are most cost-effective to meet, or more concentrated good by helping those whose needs are most severe—but that is not an evolutionarily stable strategy. Not that I’m a big fan of evolution, especially when it comes to ethics, but let me put it this way—it is very rare to see someone who works overtime so that they can donate money to save strangers.
If you’re trying to figure out a universal code of ethics, it would probably help if lots of people, or at least yourself, are willing to follow it. If not, it might still have some use if people shift their morality at least a little toward the optimum, and a lot of value if it can be implemented in an AI. But for now, it would be more useful to have a decently good but popular code of ethics, and that probably means valuing “our own” more than “others”.
Thought experiment: what’s the social status of someone who follows a fund-raising drive for the benefit of a community member by reminding people how many lives in Africa could be saved with that same money?
Also, human lives can have different instrumental value but the same inherent value, such that (for instance) a researcher in area X that has the potential to save many, many lives is worth more instrumentally than a random man-on-the-street.
Joshua, what kind of scenarios could those be? (But I would do a straightforward expected-lives-saved calculation, keeping in mind the uncertainty of whether it would actually move the Singularity forward, and whether bad PR and having the police on my tail could delay the Singularity. The actual action would depend quite sensitively on the circumstances.)
Stuart, while this isn’t exactly my topic, I believe some models of physics require the universe to be infinite, and so if one of those models is true with high probability then the universe is infinite with high probability. As Eliezer said, there are at least three reasons (spatially infinite universe, chaotic inflation, many-worlds interpretation) to believe this. I have a hard time believing they’re all wrong.
“Certainly, people in our immediate community are more valuable than people we have never met in other continents. However, I don’t think “community” should include beyond those who we actually interact with. It shouldn’t include abstract groupings such as “state” or “nation”. Supporting your high-school sports team is fine.”
Yeah, wouldn’t the world be a great place if everyone thought like this… screw helping the world… let’s just help ourselves and those whom we interact with. Oh yeah, and while we are at it, why don’t we completely cut off the less fortunate from our community so that they don’t count as people we interact with… then we can give even more money to our high school sports teams.
You’re not making your point. It’s a prisoner’s dilemma. You can’t control how the other party acts.
The choice between an “averagist” and “totalist” model of optimal human welfare is a tough one. The averagist wants to maximize average happiness (or some such measure of welfare); the totalist wants to maximize total happiness. Both lead to unfortunate reductio arguments. Average human welfare can be improved by eliminating everyone who is below average. This process can be repeated successively until we have only one person left, the happiest man in the world. The totalist would proceed by increasing the population to the very edge of carrying capacity, so that everyone was just a small increment above being so unhappy that they would commit suicide.
Neither model seems to match with our intuitions. As Eliezer has frequently warned, anyone who may accidentally or intentionally create a super-intelligent Artificial Intelligence that might take over the world had better beware of it adopting one of these extremes, even if the intention is to program it to be beneficent.
I’m sure philosophers have considered these questions for centuries. I’d be curious to know if there is a principled model for optimal human happiness which does not conflict so violently with our moral instincts. Not to privilege our instincts unjustifiably; it’s possible that the AI might be right to adopt one of the views above, and we are wrong, with our muddled human thinking. I would feel better if there were a nice, consistent and relatively simple model which did not lead to a seemingly horrific outcome.
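A minimal sketch of the two aggregation rules being contrasted here, in Python (the population sizes and happiness scores are invented purely for illustration; nothing is implied about how happiness could actually be measured):

```python
# Toy comparison of the "totalist" and "averagist" aggregation rules discussed above.
# Populations are represented as (number_of_people, happiness_per_person);
# all numbers are invented for illustration only.

def total_happiness(count, per_person):
    return count * per_person

def average_happiness(count, per_person):
    # Everyone in this toy model is equally happy, so the average is just per_person.
    return per_person

populations = {
    "a billion barely-happy people": (1_000_000_000, 0.1),
    "a million very happy people": (1_000_000, 90.0),
}

for name, (count, per_person) in populations.items():
    print(f"{name}: total = {total_happiness(count, per_person):,.0f}, "
          f"average = {average_happiness(count, per_person)}")

# Totalist ranking:  100,000,000 > 90,000,000, so it prefers the billion barely-happy people.
# Averagist ranking: 90.0 > 0.1, so it prefers the million very happy people.
```

The only point of the sketch is that the two rules can rank the same pair of worlds in opposite orders, which is where both reductios come from.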
“Obvious” counterargument: If you kill everyone else, the happiest man in the world will become less happy.
Except that if they still believe their lives are worth living, then you are causing them disutility by violating their preference to survive. It also causes everyone else disutility, because they don’t want other people killed, and because they become worried about themselves or their families dying if they become unhappy. It also eliminates the future possibility of the killed people’s lives improving.
I believe some models of physics require the universe to be infinite
These are dependent on certain assumptions, the most general of which is that the laws of physics are the same everywhere (the universe is seen as “isotropic and homogeneous”). But those sorts of principles arise from observation.
And we can never be entirely sure that they are true. Now, normally this doesn’t matter—the probability of them being false is so tiny that we can consider them true. But infinity is nasty. Let’s put a probability estimate on “There are more stars than X in the universe”, and let X grow. We expand the universe beyond the visible domain, beyond that where we have any information. And we have to assume that the universe remains “isotropic and homogeneous” beyond our ability to measure that it is so. The uncertainty in that will grow, and swamp our estimate.
More mathematically, I distribute a Bayesian prior over all the finite and infinite universes that can exist. Unless we can somehow put a nice measure on this “space of possible universes”, a finite number of observations will always leave many radically different possibilities for the universe. More worryingly, the prior will determine our results for us; the estimate will not converge to comparable results for different priors (if the space of possible universes is sufficiently pathological, then it will not converge on comparable results even after infinitely many observations).
For this reason I feel that philosophical arguments that depend on infinite universes should be generally avoided. But if someone knows a way round this issue, I’d be pleased to hear it—I want my universe to be infinite. So much more elegant that way.
The idea that the universe is “isotropic and homogeneous” does not require it to be infinite. For example, the universe could be shaped like a sphere’s surface, which is closed and finite. The size and shape of the universe, and its ultimate fate, are answered by the question “What is the sum of the angles of a very large triangle?” (this turns out to be equivalent to measuring Omega, the density parameter of the universe).
If Omega > 1, then the universe is closed, shaped like a sphere, finite, and will collapse in a Big Crunch. If Omega = 1, then the universe is flat (but could still be finite, e.g. doughnut-shaped, or infinite, like the Cartesian plane), and will end in heat death. If Omega < 1, then the universe is open, shaped like an infinite saddle, and will expand forever, ending in a Big Freeze. To the best of our knowledge, Omega appears to be very close to 1, but within the measurement error it might actually be slightly bigger or smaller than 1.
Shape of the Universe
Fate of the Universe
I’d be curious to know if there is a principled model for optimal human happiness which does not conflict so violently with our moral instincts.
Seems we need to take “creating” and “destroying” humans out of the equation—total or average happiness can work fine in a fixed population (and indeed they rank outcomes the same there). We can tweak the conditions maybe, and count the dead and the unborn as having a certain level of happiness—but it will still lead to conclusions that violate our instincts; there will always be moments where creating a new life while making everyone unhappy, or killing off someone to raise average happiness, will be the right thing for the model to do.
I think we need to deal with “creating” and “destroying” people with other principles than happiness.
Stuart, as far as the infinities go, I can imagine arguments that suggest that an infinite universe is more likely than a finite one, especially a finite one that is extremely large. For example, if the laws of physics were to turn out to be much simpler for an infinite universe, given our observations, that would be evidence in that direction. Conceptually, infinity is a simpler concept than particular very large numbers, so Occam’s razor might lead us to choose infinity.
In fact I would argue that if your prior has a non-zero probability for infinite size, then there must be a size N such that once you have observed the universe to be bigger than N, you must conclude that the universe is more likely to be infinite than finite in size. For any such prior there must be such an N. So unless you are positive a priori that infinite universes are impossible, evidence could eventually convince you that it is likely.
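A sketch of the Bayes-rule step implicit in that argument (nothing here is specific to cosmology; “size” stands for any observable measure of how big the universe is):

Let $p = P(\text{infinite}) > 0$. Under any mixture of finite-size hypotheses, $P(\text{size} > N \mid \text{finite}) \to 0$ as $N \to \infty$, while $P(\text{size} > N \mid \text{infinite}) = 1$, so the posterior odds

$$\frac{P(\text{infinite} \mid \text{size} > N)}{P(\text{finite} \mid \text{size} > N)} = \frac{p}{(1-p)\,P(\text{size} > N \mid \text{finite})}$$

grow without bound as $N$ increases, and in particular exceed any fixed threshold for some finite $N$.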
Joe, that wasn’t my point. I believe ethical theories can and should try to capture the spirit of morality. A truer way to appreciate the value of a stranger’s life is to understand that to many people close to her, she is not merely a stranger. I was mainly giving a possible reason for not DRASTICALLY sacrificing your money for donation.
And I fully agree with Stuart that it should be an exception and not a rule.
But (I call back) I already saved one child from the train tracks, and thus I am “unimaginably” far ahead on points. Whether I save the second child, or not, I will still be credited with an “unimaginably” good deed. Thus, I have no further motive to act. Doesn’t sound right, does it?
This isn’t a problem with the claim that a human life is of infinite value as such. It’s a problem with the claim that it’s morally appropriate to attach the concept of comparable value to human lives at all. It’s what happens when you start taking most utilitarians seriously. (For an overview of some of the creepy results you get when you start really applying utilitarianism, check out Derek Parfit’s Reasons and Persons.)
A Kantian would have an easy answer to your infinite moral points. That answer would go something like this: “there’s no such thing as moral points, and it’s not about how good you feel about yourself. There’s a duty to save the kid, so go do it.” And that position is based on the same intuition about the infinite value of persons as the one you quote, but instead of infinity as a mathematical concept, it’s about lexical priority: saving humans (or, generally, treating them as ends, etc.) is simply lexically ahead of all other values. And we can understand that as being what the Talmud really meant.
Paul, since my background is in AI, it is natural for me to ask how a “duty” gets cashed out computationally, if not as a contribution to expected utility. If I’m not using some kind of moral points, how do I calculate what my “duty” is?
How should I weigh a 10% chance of saving 20 lives against a 90% chance of saving one life?
If saving life takes lexical priority, should I weigh a 1/googleplex (or 1/Graham’s Number) chance of saving one life equally with a certainty of making a billion people very unhappy for fifty years?
Such questions form the base of some pretty strong theorems showing that consistent preferences must cash out as some kind of expected utility maximization.
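For the first of those questions, the calculation a straightforward expected-lives-saved maximizer would do (taking the stated numbers at face value):

$$0.10 \times 20 = 2 \text{ expected lives} \;>\; 0.90 \times 1 = 0.9 \text{ expected lives},$$

so the 10% chance of saving twenty lives wins, despite being the less certain option.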
Eliezer: that’s a good point as far as it goes. But the answer many contemporary deontologists would give is that you can’t expect to be able to computationally cash out all decision problems, and particularly moral decision problems. (Who said morality was easy?) In hard cases, it seems to me that the most plausible principles of morality don’t provide cookie-cutter determinate answers. What fills in the void? Several things kick in under various versions of various theories. For example, some duties are understood as optional rather than necessary, which gives the agent enough room to make either decision (as long as s/he’s acting from moral motivations). Similarly, some decisions can be made by the agent’s consideration of their own character—what sort of a person am I? What are my values? Those sorts of things can often fill in the gaps in practical reason left by non-computational moral theories.
how a “duty” gets cashed out computationally, if not as a contribution to expected utility. If I’m not using some kind of moral points, how do I calculate what my “duty” is?
We humans don’t seem to act as if we’re cashing out an expected utility. Instead we act as if we had a patchwork of lexically distinct moral codes for different situations, and problems come when they overlap.
Since current AI is far from being intelligent, we probably shouldn’t see it as compelling argument for how humans do or should behave.
Such questions form the base of some pretty strong theorems showing that consistent preferences must cash out as some kind of expected utility maximization.
Sounds right. The more reliable information I get about the world, the more my moral preferences start resembling a utility function. Out of interest, do you have a link to those theorems?
But the consistency assumption is not present in humans, even morally well-rounded ones. We are always learning, intellectually and morally. The moral decisions we make affect our moral values as well as the other way round (this post touched on similar ideas). Seeing morality as a learning process may bring it closer to Paul’s queries: what sort of a person am I? What are my values?
Except here the answers to the questions come as a result of the moral action, rather than before it.
I don’t see the relevancy of Mr. Burrows’ statement (correct, of course) that “Very wealthy people give less, as a percentage of their wealth and income, than people of much more limited means. For wealthy philanthropists, the value from giving may be in status from the publicity of large gifts.”
This is certainly of concern if our goal is to maximize the virtue of rich people. If it is to maximize general welfare, it is of no concern at all. The recipients of charity don’t need a percentage’s worth of food, but a certain absolute amount.
Is there anyone else who reads this and thinks, “but my altruism is ultimately grounded in the emotional effect that altruism has on myself; it cannot be otherwise. I’m only deluding myself to think that more lives are better, since from my perspective, they feel the same (and I’m trapped in my perspective; my perspective is the only one that can matter in my decision making).” That is, I don’t actually try to maximize utility generally, just my own utility. It just so happens that the primary way to maximize my utility in most situations is to help others, to grab the “altruism high.” If the high between two choices is equal, then I’m indifferent between them, whatever my “multiplying brain” says.
Is this horrible?
Maybe.
Is it true?
Maybe. Tell me why I’m wrong.
There is no need for morality to be grounded in emotional effects alone. After all, there is also a part of you that thinks that there is, or might be, something “horrible” about this, and that part also has input into your decision-making process.
Similarly, I’d be wary of your point about utility maximisation. You’re not really a simple utility-maximising agent, so it’s not like there’s any simple concept that corresponds to “your utility”. Also, the concept of maximising “utility generally” doesn’t really make sense; there is no canonical way of adding your own utility function together with everyone else’s.
Nonetheless, if you were to cash out your concepts of what things are worth and how things ought to be, then in principle it should be possible to turn them into a utility function. However, there is a priori no reason that that utility function has to only be defined over your own feelings and emotions.
If you could obtain the altruistic high without doing any of the actual altruism, would it still be just as worthwhile?
The high is a mechanism by which values are established. Reward or punishment in the past but not necessarily in the present is sufficient for making you value something in the present. Because of our limited memories introspection is pretty useless for figuring out whether you value something because of the high or not.
If you have the values already and you don’t have any reason to believe the values themselves could be problematic, does it matter how you got them?
It may be that an altruistic high in the past has led you to value altruism in the present, but what matters in the present is whether you value the altruism itself over and above the high.
The idea is that valuing a life as that important is what guides the HOW to save the nation. The how is with utmost regard for all people’s existence—especially their exposure to suffering.
By valuing people, in this case human life, to that great a degree, it establishes respectful acknowledgement of the great forces which were set in motion to create such a marvel as a human being.
Plus, there IS always the accountability for having devalued a human life, when that is the beginning of the end of good policy, behavior, ethics, decency. To value one human so much? Makes you valuable to humans. Etc.
I know I’m way behind for this comment, but still: this point of view makes sense on a level—that saving additional people is always(?) virtuous and you don’t hit a ceiling of utility. But, and this is a big one, it is a very simplistic model of virtue calculus, and the things it neglects turn out to have a huge and dangerous impact.
First case in point: can a surgeon harvest organs from a healthy innocent bystander to save the lives of five people in dire need of those organs? Assuming they match and there is no other donor—an unfortunately likely situation. According to this, we must say that they not only can, but should, since the surgeon is damned as a murderer either way, so at least they should stack up the lower number of bodies. I hope I don’t need to explain that this goes south. This teaches us that there must be some distinction between taking a negative action and avoiding a (net) positive one.
Another case: suppose I’m in a position to save lives on a daily basis, e.g. an ER doctor. If a life not saved is a life lost, then every hour that I rest, or, you know, have fun, is another dead body on my scoreboard. The same goes for anyone doing their best to save lives, but in any way other than the single optimal one with the maximal expected number of lives. This one optimal route, if we’re not allowed to rest, leads to burnout very quickly and loses lives in the long run. So we must find (immediately!) the One Best Way, or be doomed to be perpetual mass murderers.
As Zach Weinersmith (and probably others) once said, “the deep lesson to learn from opportunity cost is that we’re all living every second of our lives suboptimally”. We’re not very efficient accuracy engines, and most likely not physically able to carry out any particular plan to perfection (or even close), so almost all of the time we’ll get things at least somewhat wrong. So we’ll definitely be mass murderers by way of failing to save lives, but… Then… Aren’t we better off dead? And then are lives lost really that bad...?
And you can’t really patch this neatly. You can’t say that it’s only murder if you know how to save them, because then the ethical thing would be to be very stupid and unable to determine how to save anyone. This is also related to a problem I have with the Rationalist Scoreboard of log(p) that Laplace runs at the Great Spreadsheet Above.
And even if you try to fix this by allowing that we maintain ourselves so as to save more lives in the long run, we 1) don’t know exactly how much maintenance this should be, and 2) will find that our best attempt at this ends up with everyone being miserable, just trying to maximize lives but not actually living them, since pain/harm is typically much easier to produce and more intense than pleasure.
And, of course, all of this is before we consider human biases and social dynamics. If we condemn the millionaire who saves lives inefficiently, we’re probably drawing attention away from the many others who don’t even do that. Since it’s much easier to be exposed to criticism than to earn praise in this arena (and this, in the broad sense, is a strong motivation for people to try to be virtuous), many people would see this and give up altogether.
The list goes on, but my rant can only go so long, and I hope that some of the holes in this approach are now more transparent.