Self-sacrifice is a scarce resource
“I just solved the trolley problem.… See, the trolley problem forces you to choose between two versions of letting other people die, but the actual solution is very simple. You sacrifice yourself.”
- The Good Place
High school: Naive morality
When I was a teenager, I had a very simple, naive view of morality. I thought that the right thing to do was to make others happy. I also had a naive view of how this was accomplished—I spent my time digging through trash cans to sort out the recyclables, picking up litter on the streets, reading the Communist Manifesto, going to protests, you know, high school kind of stuff. I also poured my heart into my dance group, which was composed almost entirely of disadvantaged students—mainly poor, developmentally disabled, or severely depressed, though we had all sorts. They were good people, for the most part, and I liked many of them simply as friends, but I probably also had some sort of intelligentsia savior complex going on with the amount of effort I put into that group.
The moment of reckoning for my naive morality came when I started dating a depressed, traumatized, and unbelievably pedantic boy with a superiority complex even bigger than his voice was loud. I didn’t like him. I think there was a time when I thought I loved him, but I always knew I didn’t like him. He was deeply unkind, and it was like there was nothing real inside of him. But my naive morality told me that dating him was the right thing to do, because he liked me, and because maybe if I gave enough of myself I could fix him, and then he would be kind to others like he was to me. Needless to say, this did not work. I am much worse off for the choices I made at that time, with one effect being that I have trouble distinguishing between giving too much of myself and just giving basic human decency.
And even if it were true that pouring all of my love and goodness out for a broken person could make them whole again, what good would it be? There are millions of sad people in the world, and with that method I would only be able to save a few at most (or in reality, one, because of how badly pouring kindness into a black hole burns you out). If you really want to make people’s lives better, that is, if you really care about human flourishing, you can’t give your whole self to save one person. You only have one self to give.
Effective altruism, my early days
When I first moved to the Bay, right after college, I lived with five other people in what could perhaps practically but certainly not legally be called a four-bedroom apartment. Four of the others were my age, and three of us (including me) were vegan. The previous tenants had left behind a large box of oatmeal and a gallon of cinnamon, so that was most of what I ate, though I sometimes bought a jar of peanut butter to spice things up or mooched food off of our one adult housemate. I was pretty young and pretty new to EA and I didn’t think it was morally permissible to spend money, and many of my housemates seemed to think likewise. Crazy-burnout-guy work was basically the only thing we did—variously for CEA, CHAI, GiveWell, LessWrong, and an EA startup. My roommate would be gone when I woke up and not back from work yet when I fell asleep, and there was work happening at basically all hours. One time my roommate and I asked Habryka if he wanted to read Luke’s report on consciousness with us on Friday night and he told us he would be busy; when we asked with what he said he’d be working.
One day I met some Australian guys who had been there in the really early days of EA, who told us about eating out of the garbage (really!) and sleeping seven to a hallway or something ridiculous like that, so that they could donate fully 100% of their earnings to global poverty. And then I felt bad about myself, because even though I was vegan, living in a tenement, half-starving myself, and working for an EA org, I could have been doing more.
It was a long and complex process to get from there to where I am now, but suffice it to say I now realize that being miserable and half-starving is not an ideal way to set oneself up for any kind of productive work, world-saving or otherwise.
You can’t make a policy out of self-sacrifice
I want to circle back to the quote at the beginning of this post. (Don’t worry, there won’t be any spoilers for The Good Place.) It’s supposed to be a touching moment, and in some ways it is, but it’s also frustrating. Whether self-sacrifice was correct in that particular situation is beside the point; the problem is that self-sacrifice cannot be the answer to the trolley problem.
Let’s say, for simplicity’s sake, that me jumping in front of the trolley will stop it. So I do that, and boom, six lives saved (the five on the main track, plus the one on the side track who would have died if I’d pulled the lever). But if the trolley problem is a metaphor for any real-world problem, there are millions of trolleys hurtling down millions of tracks, and whether you jump in front of one of those trolleys yourself or not, millions of people are still going to die. You still need to come up with a policy-level answer to the problem, and the fact remains that the policy that will result in the fewest deaths is switching tracks to kill one person instead of five. You can’t jump in front of a million trolleys.
There may be times when self-sacrifice is the best of several bad options. Like, if you’re in a crashing airplane with Eliezer Yudkowsky and Scott Alexander (or substitute your morally important figures of choice) and there are only two parachutes, then sure, there’s probably a good argument to be made for letting them have the parachutes. But the point I want to make is, you can’t make a policy out of self-sacrifice. Because there’s only one of you, and there’s only so much of you that can be given, and it’s not nearly commensurate with the amount of ill in the world.
Clarification
I am not attempting to argue that, in doing your best to do the right thing, you will never have to make decisions that are painful for you. I know many a person working on AI safety who, if the world were different, would have loved nothing more than to be a physicist. I’m glad for my work in the Bay, but I also regret not living nearer to my parents as they grow older. We all make sacrifices at the altar of opportunity cost, but that’s true for everyone, whether they’re trying to do the right thing or not.
The key thing is that those AI safety researchers are not making themselves miserable with their choices, and neither am I. We enjoy our work and our lives, even if there are other things we might have enjoyed that we’ve had to give up for various reasons. Choosing the path of least regret doesn’t mean you’ll have no regrets on the path you go down.
The difference, as I see it, is that the “self-sacrifices” I talked about earlier in the post made my life strictly worse. I would have been strictly better off if I hadn’t poured kindness into someone I hated, or if I hadn’t lived in a dark converted cafe with a nightmare shower and tried to subsist off of stale oatmeal with no salt.
You’ll most likely have to make sacrifices if you’re aiming at anything worthwhile, but be careful not to follow policies that deplete the core of yourself. You won’t be very good at achieving your goals if you’re burnt out, traumatized, or dead. Self-sacrifice is generally thought of as virtuous, in the colloquial sense of the word, but moralities that advocate it are unlikely to lead you where you want to go.
Self-sacrifice is a scarce resource.
I frame it a little differently. “Self” is the scarce resource. Self-sacrifice can be evaluated just like spending/losing (sacrificing) any other scarce and valuable resource. Is the benefit/impact greater than the next-best thing you could do with that resource?
As you point out in your examples, the answer is mostly “no”. You’re usually better off accumulating more self (becoming stronger), and then leveraging that to get more result with less sacrifice. The balance may change as you age, and the future rewards of self-preservation get smaller as your expected future self-hours decrease. But even toward end-of-life, the things often visible as self-sacrifice remain low-impact and don’t rise above the alternate uses of self.
Taking this from a Kantian-ish perspective: what would actually happen if many people adopted this policy? From a third-person perspective, the policy translates to: “The proper way to solve an ethical problem is to kill the people who take ethics most seriously.” I can imagine some long-term problems with this, such as running out of ethical people rather quickly. If ethics means something other than virtue signaling, it should not be self-defeating.
This reminds me of something that happened when I joined the Bay Area rationalist community. A number of us were hanging out and decided to pile in a car to go somewhere, I don’t remember where. Unfortunately there were more people than seatbelts. The group decided that one of us, who was widely recognized as an Important High-Impact Person, would definitely get a seatbelt; I ended up without a seatbelt.
I now regret going on that car ride. Not because of the danger; it was a short drive and traffic was light. But the self-signaling was unhealthy. I should have stayed behind, to demonstrate to myself that my safety is important. I needed to tell myself “the world will lose something precious if I die, and I have a duty to protect myself, just as these people are protecting the Important High-Impact Person”.
Everyone involved in this story has grown a lot since then (me included!) and I don’t have any hard feelings. I bring it up because offhand comments or jokes about sacrificing one’s life for an Important High-Impact Person sound a bit off to me; they possibly reveal an unhealthy attitude towards self-sacrifice.
(If someone actually does find themselves in a situation where they must give their life to save another, I won’t judge their choice.)
This is super important, and I’m curious what your process of change was like.
(I’m working on an analogous change: I’ve been terrified of letting people down for my whole adult life.)
If you find yourself doing too much self-sacrifice, injecting a dose of normative and meta-normative uncertainty might help. (I’ve never had this problem, and I attribute it to my own normative/meta-normative uncertainty. :) Not sure which arguments you heard that made you extremely self-sacrificial, but try Shut Up and Divide? if it was “Shut Up and Multiply”, or Is the potential astronomical waste in our universe too small to care about? if it was “Astronomical Waste”.
I greatly appreciate this post! Although Yudkowsky clearly supports the consideration of emotions in rational decision making (see Feeling Rational), I find that a lot of the posts here idealize logic in the absence of emotion. Or at the very least, they are written in a style that has the same quirks and particularities as my intelligent friends who struggle more with emotional intelligence, emotional self-awareness, and more broadly interacting with other people because they don’t find these important (or perhaps they have the causation backwards—they don’t find these things “important” because they struggle to master them, and it is more comfortable to believe that where they excel is where it’s important to excel, and where they struggle is where it doesn’t matter anyway).
This post beautifully shows how applying overly stringent “rational” ideas can be incredibly harmful to those who do experience emotions unusually strongly. It would seem that mingyuan has emotions that are stronger than the average person’s. It takes a strong love of better outcomes for humanity to apply effective altruism so extremely; it takes a strong sense of guilt to want to take such a lifestyle to even greater extremes when already living off of stale oatmeal in a far too crowded apartment and overexerting oneself at a job. Most people, from what I have observed, are not able to feel such passion about the idea of helping people they don’t personally know.
I think the post is a beautiful and vivid illustration of how the psychology of emotional well-being is too little discussed in many rationalist communities, even though understanding emotional well-being and one’s own limits is extremely important when trying to make effective, sustainable rational choices about how to live. Thank you so much for the thoughtful post.
If there are millions of trolleys about and millions of people willing to self-sacrifice to stop them, then suicidal fixing can be a valid policy. Baneling ants exist and are selected for.
The impulse to value self-sacrifice might come from the default position that people are very good at looking after their own interests. So at a coarse level, any “self-detrimental” effect is likely to come from complicated or abstract moral reasoning. But then there is the identity-blind kind of reasoning: if you think that people who help others should not be tired all the time, then if person A helps others and is tired, you should arrange for their relaxation. This remains true if person A is yourself. The basic instinct is to favour giving yourself a break because it is hedonistically pleasing, but the reasoning that persons in your position should arrange their affairs in a certain way is a kind of “cold” basis for possibly the same outcome.
A policy that good people should commit suicide just because they are good is a very terrible policy. But the flip side is that some bad people will pay unspeakable costs to gain real percentage points of survival. People have a right to life, even in an extended way that covers smaller things than life-and-death. But life can be overvalued, and most real actions carry a slight chance of death.
Then there is the issue of private matters versus public matters. Suppose there is a community of 1000 people that has one shared issue involving the life and death of 100 people, and each member also has a private matter involving 1 different person. Via one logic, everybody sticking to their own business saves 1000 people versus 100; via another, a person doing public work over private work saves 100 people versus 1 person. However, if 100 people do public work at the cost of their private work, then it is a choice between 100 and 100 people. Each of those 100 can think they are a super-efficient 100:1 hero, while those who choose their select few close ones can seem like super-inefficient 1:100 ones.
Your last paragraph doesn’t make much sense to me. I think you need to specify how much needs to be done in order to resolve that one shared issue. If it requires the same investment from all 1000 people as they’d have put into saving those single individual lives, then it’s 1000 people versus 100 people and they should do the individual thing. If it requires just one person to do it, then (provided there’s some way of selecting that person) it’s 1 person versus 100 people and someone should do the shared thing. If it requires 100 people to do it, then as you say it’s a choice of 100 versus 100 and other considerations besides “how many people saved?” will dominate. But none of this is really about private versus public, and whether someone’s being efficient or inefficient in making a particular choice depends completely on that how-much-needs-to-be-done question that you left unspecified.
(There are public-versus-private issues, and once you nail down how much public effort it takes to resolve the shared issue then they become relevant. Coordination is hard! Public work is more visible and may motivate others! People care more about people close to them! Etc., etc.)
Why is it mandatory? What happens if I don’t specify?
I wrote it as weighing the importance, but I had an inkling it is more a question of how much needs to be done. If one has access to accurate effort information, then utilitarian calculus is easy. However, sometimes there are uncertainties about the effort required, and some logics do not require or access this information. For example, you might know exactly how cool it would be to be on the moon but have no idea whether it is expensive or super duper expensive, and you need to undertake a research program during which the costs become clear. Or you could improve healthcare, or increase the equanimity of justice. So does that mean that because costs are harder to estimate in one field than in others, predictable costs get selected over more nebulous ones? Decisions under big cost uncertainty and with difficulty comparing values are not super rare. But still, a principle of “if you use a lot of resources for something, it had better be laudable in some sense” survives.
For example, if an effective selection mechanism is not found, there is a danger that 1 person actually does the job, 1 tries to help but is only half effective, and 98 people stand and watch as the two struggle. In the other direction, a high probability of being a useless bystander might mean that 0 people attempt the job. If everybody just treated jobs as jobs, without distinction as to how many others might try them, the jobs with the most “visibility” would likely be overcrowded, or at least overcrowded relative to their actual importance. In a way, what has sometimes been described as a “bias” (dilution of responsibility) can be seen as a hack / heuristic to solve this situation. It tries to balance things so that in a typical-size crowd the expected number of people taking action is a small finite number, by raising the bar to action according to how big a crowd you are in. It is a primitive kind of coordination, but even that helps a lot.
Overly sacrificial behaviour could be analysed as giving far too much importance to other people’s worries; that is, removing the dilution of responsibility without replacing it with anything more advanced. Somebody who tries to help everybody in a village will, as a small detail, spend a lot of time salesmanning across the village, and the transit time alone might cut into the efficiency, even before considering factors like greater epistemological distance (you spend a lot of time interviewing people about whether they are fine or not) and not being fit for every kind of need (you might be good at carpentry but that one requires masonry). Taking these somewhat arbitrary effects into account, you could limit yourself to a small geographical area (less travelling), do stuff only upon request (people need to know what their own needs are), or only do stuff you know how to do (do the carpentry for the whole country but no masonry for anyone). All of these move in the direction that some need somebody has will go unaddressed by you personally.
Mandatory? It’s not mandatory. But if you don’t specify then you’re making an argument with vital bits missing.
I agree that utilitarian decision making (or indeed any decision making) is harder when you don’t have all the information about e.g. how much effort something takes.
I also agree that in practice we likely get more efficiency if people care more about themselves and others near to them than about random people further away.
Well, the specification would be “jobs of roughly equal effort”, which I guess I left implicit in a bad way.
I think you are arguing that the essence will depend on the efficiency ratios, but I think the shared vs not-shared property will overwhelm efficiency considerations. That is, if job efficiency varies between 0.1 and 10 and the populations are around 10000 and 100000, then 1000 public-effort lives at typical bad efficiency will seem comparable to 1 private life at good efficiency, while at the population level, doing the private option at bad efficiency would be comparable to getting the public option done. Thus any issue affecting the “whole” community will overwhelm any private option.
It is crucial that the public task is finite and shared. If you could start up independent “benefit all” extra projects (and get them done alone), the calculus would be right. One could also try to point out the error via the “marginal result”: yes, it is an issue of 1000 lives, but if your participation doesn’t make or break the project, then it is of zero impact. So one should be indifferent rather than thinking it is of the utmost importance. If it can partially succeed, then the impact is the increase in success, not the total success. Yet when you think of something like “hungry people in Africa”, your mind probably refers to the total issue/success.
If I ask what the circumference of a circle is, a lot of people would accept “2πr” as the answer. Somebody could insist that I state the radius, as essential information to determine how long the circumference would be. Efficiency is not essential to the phenomenon that I am trying to point out.
Curated. I resonate with many of the examples in this, and have made a lot of similar mistakes (including before I met the rationalist and the EA communities). This essay described those thinking patterns and their pathologies pretty starkly and helps me look at them directly. I expect to reference this post in future conversations when people I know are making big decisions, especially where I feel they’re not understanding how much they’re sacrificing for this one decision, with so many decisions still ahead (i.e. your framing about policies vs one-shot).
One hesitation I have is that, while I strongly inside-view connect to this post, perhaps I am typical-minding on how much other people share these thinking patterns, and people might find it a bit uncomfortable to read. But I do think a lot of people I respect have thoughts like this, so expect it will strongly help a lot of people to read it. (Also it’s a well-written essay and quite readable.)
How did you meet your roommates? I would like to surround myself with similar people (perhaps less extreme).
I think there is one important negative of self-sacrifice that you are missing here, or at least of self-sacrifice that is apparent to anyone but yourself.
Even though it’s a cliche quote, Zarathustra puts it best:
It is extremely hard to criticize the choices of someone who seems to be sacrificing a lot, or at least who has that impression of themselves and of whom others have that impression. For you are afraid of disturbing whatever “holiness” led them there, and even if not, you are afraid of other people seeing it that way and thus shunning you for the criticism.
I think perhaps, as humans, we want morality and happiness to overlap when this is rarely the case. Self-sacrifice is definitely a limited resource, but if most people believed it to be a moral duty, the human race would likely be better off. The problem with the self-sacrificial strategy is the problem of defection in any game.
If we could convince a sufficient number of people to sacrifice their personal resources and time, then the average cost of self-sacrifice could go down enough that more people would be willing to do it, and we would all be better off. But there will always be those who defect for personal gain. In the modern world, we have little incentive to give more than a small amount.
Even if we’re good people, we have to choose whether to maximize happiness and optimize goodness, or the reverse. I think the key advantage of maximizing happiness and optimizing morality is that we can still do good, though less than we might have otherwise, while having an attractive enough life for others to want to do the same.
I think that the most effective strategies in altruism are those that can coerce systems into rewarding those that would have otherwise defected—like somehow making good people cooler, richer, or happier. So, perhaps it is those strategies that make you happy while helping others at the same time that are the most likely to do the most good in the long run.
So, essentially, I’m agreeing with you, but from a slightly different perspective.
At the risk of stating very much the very obvious:
The trolley problem (or the fat man variant) is the wrong metaphor for nearly any ethical decision anyway, as there are very few real-life ethical dilemmas that are as visceral, that require immediate action from such a limited set of options, and whose consequences are nevertheless as clear.
Here are a couple of somewhat more realistic matters of life and death. There are many stories (I could probably find factual accounts, but I am too lazy to search for sources) of soldiers who make the snap decision to save the lives of the rest of their squad by jumping on a thrown hand grenade. Yet I doubt many would cast much blame on anyone who had a chance of taking cover and did that instead. (I wouldn’t.) Moreover, the generals who order prisoners (or agitate impressionable recruits) to clear a minefield without proper training or equipment are to be much frowned upon. And of course, there are untold possibilities for committing a dumb self-sacrifice that achieves nothing.
In general, a military force cannot be very effective without people willing to put themselves in danger: if one finds oneself in agreement with the existence of states and armies, some amount of self-sacrifice follows naturally. For this reason, there are acts of valor that are viewed positively and to be cultivated. Yet there are also common Western moral sentiments which dictate that it is questionable or outright wrong to require the unreasonable of other people, especially if the benefactors or the people doing the requiring are contributing relatively little themselves (a sentiment demonstrated here by Blackadder Goes Forth). And in some cases, drawing a judgement is considered difficult.
(What should one make of the Charge of the Light Brigade? I am not a military historian, but going by the popular account, the order to charge was stupid, negligent, a mistake, or all three. Yet to some people there is something inspirational in the foolishness of soldiers fulfilling the order; others would see such views as abhorrent legend-building propaganda that devalues human life.)
In summary, I have few concrete conclusions to offer, and anyway, details from one context (here, the military) do not necessarily translate very well into other aspects of life. In some situations, (some amount of) self-sacrifice may be a good option, maybe even the best or only option for obtaining some outcomes, and it can be a good thing to have around. On the other hand, in many situations it is wrong or contentious to require large sacrifices from others, and people who do so (including via extreme persuasion leading to voluntary self-sacrifice) are condemned for taking unjust advantage of others. Much depends on the framing.
As the reader may notice, I am not arguing from any particular systematic theory of ethics, but rehashing my moral intuitions about what is considered acceptable in the West, assuming there is some signal of ethics in there.
This really got me thinking about something. You mentioned that sacrificing yourself will result in the trolley being stopped, with one casualty and six people saved. The most common decision made in the trolley problem is to sacrifice one person to save the other five. Either of these two scenarios (sacrificing yourself or sacrificing the person on the track) results in exactly one casualty. However, the number of people saved is different: sacrificing yourself (or any outside person, really) saves more people than pulling the lever. I feel confident in saying that most people would prefer one casualty and six saved over one casualty and five saved (simple math, right?).
This would imply that the “solution” to the trolley problem (from a very simple and utilitarian standpoint) is not to pull the lever, nor to just stand there, but to sacrifice a completely innocent bystander...
This also fits into the question of how an AI would deal with the trolley problem. If, for example, an AI is given the instruction to “save the most people”, it’s not unreasonable to assume it could make a very different decision than an “ethical” human being would, or something even scarier: a decision we didn’t even consider.
I’m very interested in your perspective, mostly because I find it so alien to my own. A little background: I work in law enforcement, and before that I served in the American military. In those settings I have come to see human suffering as the norm, and not a problem to be fixed. If I were to make a big-picture worldview of things, I would say that the “natural” state of the universe is randomness and chaos. Human beings are a thing that makes shaky structures, sometimes literally, but most often some sort of society that breeds more citizens than it can either feed or make use of. A very few of these shaky structures are sturdier than others. Classical virtue ethics have lasted a while. The sort of altruism/communalism found in Christ or the Buddha still makes an impact. But chaos is always there, and inescapable.
My reaction to this has been either to protect the perimeter of the things/people/ideas I care about, or to tend the garden of the place I want to be. I do not want to give money to cure malaria, because I see no evidence at all that curing malaria will help anyone I might actually meet in the first-world nation I live in, and I do not care about the lives of the animals I eat from the factory, other than how it might affect my personal health.
I find the idea that rationality can effect altruistic belief a fantasy that seems mostly to be shared by the sorts of people who have won at the current rules of meritocracy. This sounds much harsher than I intend it to: intelligent, rational people indulging in flights of whimsy can produce wonderful things! But I do not believe this sort of thought is anything but that. Your story seems like a journey from foolishness to experience.