Balancing Costs to Ourselves with Benefits to Others
Once you accept the idea that we have some obligation to try to help other people, you are faced with the question of how to trade off costs to yourself against benefits to others. Questions like "how much should I give?", "should I avoid slave-created chocolate?", and "should I become a doctor?" are facets of this broader question of "how much should I give up to help others?"
The simplest approach to this, which comes pretty naturally to me and to most people, is to compartmentalize. You figure out how much you can afford to donate, whether there are foods you’re willing to give up, how often to give blood or other body parts, and how much time you can spend volunteering. Within each category you do the best you can to find the right balance. For example, Alexander Berger writes:
I have a policy on how much money I give to charity, and I decided to donate a kidney, but both decisions depended on the specific circumstances of what would be asked of me.
The problem with this approach is that sometimes you can do better, both for yourself and for others, by trading off between categories. Say the positive effect of giving $1200 to the AMF is about the same as that of donating a kidney. If I looked at things in terms of having one policy for monetary donation and another for organ donation, I might decide to give $X and also donate a kidney, but I'm really not that excited about donating the kidney. I might be happier if I gave $X+$1200 but kept my kidney, which would be neutral from the perspective of benefit to others.
I think the right framework for thinking about this sort of thing is to decide that there’s a certain amount of happiness you’re willing to forgo for the sake of others, and then do the most good you can within that bound. [1] This doesn’t even have to be a very large amount of happiness; you can do a lot of good by giving future income raises to effective charity.
(This still doesn’t answer the question of how much you should be willing to sacrifice for others. I don’t have a good answer for that yet.)
I also posted this on my blog.
[1] Technically, this is the knapsack problem, which is NP-hard. But in practice the actually difficult bit is getting good estimates for the value of all the competing good choices and their likely effects on your happiness.
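The knapsack framing in the footnote can be made concrete with a toy sketch. The options, their happiness costs, and their "good done" numbers below are all made up for illustration; in practice, as the footnote says, estimating those numbers is the hard part, not the optimization.

```python
# Toy 0/1 knapsack: pick the set of altruistic "options" that does the
# most good while keeping total happiness cost within a budget.
# All costs and benefits here are invented for illustration.

def best_options(options, budget):
    """options: list of (name, happiness_cost, good_done); budget: int.
    Returns (max_good, chosen_names) via standard dynamic programming."""
    # dp[c] = (best good achievable with happiness cost <= c, options chosen)
    dp = [(0, [])] * (budget + 1)
    for name, cost, good in options:
        # Iterate costs downward so each option is used at most once.
        for c in range(budget, cost - 1, -1):
            candidate = dp[c - cost][0] + good
            if candidate > dp[c][0]:
                dp[c] = (candidate, dp[c - cost][1] + [name])
    return dp[budget]

options = [
    ("donate $1200 to AMF", 3, 10),
    ("donate a kidney",     6, 10),
    ("give up chocolate",   2, 1),
    ("volunteer weekends",  4, 5),
]
print(best_options(options, 9))
# → (20, ['donate $1200 to AMF', 'donate a kidney'])
```

With a happiness budget of 9, the sketch picks the donation and the kidney and skips the lower-value options; with a smaller budget it drops the kidney first, matching the intuition in the post that you should spend your willingness-to-sacrifice where it buys the most good.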
Disagree. How much you’re willing to give up should depend on how efficiently the sacrifice would create good for others. I would give up having a computer to save a billion lives, but not to give two people access to computers. I do not first decide whether or not to give up access to a computer and then try to figure out the best way to funnel the resources into helping others.
When a new TV and a human life are the same cost, I’m simply not evil enough to be a preference utilitarian in the first world.
Can you explain what you mean here? I’m reading it like this:
But I’m not confident that that’s what you meant.
That’s about the size of it, yeah.
To be more explicit: if you truly value all life equally, then trying to act on those beliefs by optimizing the world as efficiently as possible more or less demands that you live in a state of abject, near-starvation poverty, while donating as much as possible to charity. Anything else would mean that you value (for example) a new television over a human life, which most people would call evil if it happened more explicitly. So, your options are either to admit that preference, drop out of the first world by doing the charity thing, or refuse to acknowledge that you are able to change the world to reflect your preferences. I’m saying, somewhat facetiously, that I took the easy way out by simply saying I’m not a preference utilitarian.
Two economists are walking down the street toward an ice cream shop. The first economist turns to the second and says, “I really want an ice cream cone.” They keep walking. They walk past the ice cream shop. Halfway down the block, the second economist turns to the first and says, “I guess not.”
A lot of people like to make talking noises that are obviously factually incorrect about their own preferences. “I value everyone’s happiness equally.” What manifest piffle. It’s strange how many people think the world is made a better place by lying to others, and yourself, about what you value.
Strange?
Lying to others about what I value is an important part of this nutritious negotiating strategy.
Lying to myself lets me lie to others without feeling, or seeming, dishonest. (Indeed, most people will argue that I’m not actually lying at all in this case.)
What’s strange about it?
I try to value all life equally, but I also realize that “living in a state of abject, near-starvation poverty, while donating as much as possible to charity” wouldn’t actually be that helpful. I would probably break down and give up on all this trying to help other people if I took it too far. Even ignoring that, I would probably be much less effective at earning money to donate or convincing other people to do similarly if I were to overly restrict my own consumption.
I admit it would be better if I were to spend less on myself and more on effective charity, but perfect is the enemy of good here. If you’re shocked that people would value TVs over human lives let that motivate you to give what you can, not give up in disgust at our failings.
Is that really an acceptable excuse?
I mean, you’re clearly living charitably by choosing Wesleyan levels of consumption, but I don’t think that can be stretched to ‘valuing all life equally.’ It is just so much easier to turn money into happiness / life / any other ‘fungible’ value in the developing world than in the developed world that any reduction in your happiness will probably be more than paid for by happiness increases elsewhere.
(The conclusion I would recommend, of course, is not to abandon your charity or live less well, but to alter your justification to something you actually would endorse taking all the way.)
There is a point at which further reduction in happiness as a result of giving more means that in the long run I am able to give less (I burn out, lose my job, get promoted slower).
I’m not at this point, but I think it’s morally the best place for me to be. I’d estimate that my current place, where I spend more on myself and those around me than I should but still give as much as I feel like I can, is about 80% of the way to where I should be. [1]
[1] My guess is that the point where further reduction in self-spending also yields reductions in giving ability comes when Julia and I are trying to live on $10K instead of $20K. This would have moved our 2009 giving from $45K to $55K, a 20% improvement.
Yeah, but I’m definitely not anywhere near the point where a mental breakdown is a risk. Hell, I don’t even recycle.
And, sure, you should give what you can (what a phrase, hah!), because that’s better than doing nothing, but in that simple moral light, that doesn’t actually mean we’re not evil. We’re just choosing to ignore it. For our health.
EDIT:
Would just like to point out that this is a false dichotomy. You could restrict your consumption a lot (at least an order of magnitude) without impairing your ability to help others to a significant degree.
Julia and I live on about $22K and give about $45K (more). An order of magnitude would be going down to ~$2K. I wouldn’t be able to keep my job, which would cut my donations a lot.
I interpreted you as saying you ignored the need to help others when now it sounds like you try to ignore that we’re evil not to be doing more. These aren’t the same thing, and I think the second one is a somewhat better way to try to resolve the internal conflict. As long as you don’t resolve it away to the point you care only about the happiness of yourself and people around you.
First, my apologies. I assumed you were significantly closer to the mean than you are.
Second: Well, yes, my expressed preferences are still that I care about other people. My concern is that, based on my behavior, I clearly do not. Or, at least, I care about myself and my loved ones at least dozens-if-not-hundreds of times more.
There are people here who take ideas seriously even when this brings them to unusual places. LessWrong is a strange place.
As much as I understand I should value the joy and suffering of all people equally, I can’t fully act on it. The happiness of my family and friends, of people around me, feels unavoidably important on a really deep level. I set aside money for my much more generous wife to spend on herself, money that can’t be given away, so that she can have some spending money she doesn’t feel guilty about. I buy presents for my sisters. I pay to go to contra dances. This is only “revealed preference”, however, in as much as it reveals me to be a human, with all the biologically based irrationalities that brings. I would be a better person if I could bring myself to spend all that money on people who need it more, but I don’t let angst over my imperfection keep me from doing my best to help others.
Hm, I don’t find it helpful to analyze whether I’m an evil person. I do think we’d get outcomes I like if we all gave more to effective causes. So I set aside an amount to give, I live on the rest, and I try not to angst about it for the rest of the year. This is a better outcome than having an ugh field around the topic so strong that I end up doing nothing, which seems to be what you’re describing.
I’m going to assume the living in near-starvation and poverty thing is just an example, since that’s almost certainly not the best way to save the most lives (well-nourished humans are more capable humans), and I’ll assume your point was more along the lines of do-as-much-good-as-you-possibly-can-all-the-time.
I think you need to take into account the fact that you’re human. Just because you do something which would seem to imply some weird or evil preference doesn’t mean you need to accept that as your real preference. We are made of faulty hardware.
Faulty as compared to what? I mean, yes, if you assume our expressed preferences are what we really want, then we’re awfully (even spectacularly) bad at achieving them. If you assume that what we really want is survival, comfort, sex, food, and other things that contribute to our own genetic replication, then we’re not faulty at all. We’re actually quite good at optimizing for our actual preferences, even if we do sometimes become convinced that our preferences are something they aren’t.
EDIT: I’m not trying to bring everyone down with first-world angst. This just troubles me. I may simply have to accept that, under my own definition of the term, I’m just not a particularly good person.
What about this: Just because we have some desires, it does not automatically mean we are good at following them. And vice versa: just because we are not good at doing something, it does not mean that we really don’t want it.
For example, if I want to eat pizza, but I can’t cook pizza, I can’t convince anyone to cook pizza for me, I can’t find a pizzeria, and I am too dumb to use the internet to find any of this... does it mean that I really don’t want the pizza? Or does it just mean that I am bad at getting it in this environment, but in some other environment (where I have a pizza cookbook at home, and a pizzeria is across the street) I could be more successful?
We do not live in our natural environment. I want to help other people, but my evolutionary algorithms suppose that those people live near to me, their needs are transparent to me, and I see an immediate feedback of my help. Without these conditions, my algorithms start breaking.
Maybe what you need is not a new you, but a new definition.
This explains what I mean.
Actually, we are, since we don’t go after even those things very effectively.
I can get a new TV for under $300. What cause are you giving to?
There are TV sets much cheaper than that, especially if you buy them second-hand.
Pretty sure you can buy a new TV with much less than $1600.
… you have probably made a mistake already.
I’m a failed egoist. I’ve been a social worker and an educator most of my working life. You’d think I’d keep busy crushing my enemies, driving them before me, and hearing the lamentation of their women. But instead I prefer to help some people, some of the time. I, too, balance out the cost of myself with benefits to others. I do it out of preference and not obligation, but perhaps these three things I’ve figured out will still be useful to you.
If you don’t set any limitations, you will do harm as well as good, guaranteed. Perhaps more harm than good. Leave savior behavior to invisible monsters that live in the sky and to comic book characters.
When you’re helping directly, you can set any limitations you want. Learn from your mistakes. When you’re working within a system (a social work agency, an education system), you should follow the limitations they set. Learn from their mistakes.
If it isn’t helping, stop doing it. Maybe try something else, maybe just stop.
(I’m excited to see another social worker here!)
I find that trying to help people directly vs. indirectly has pretty different effects on me. E.g. I’m not willing to work 60 hours a week, which many people are, but I’m willing to give a lot more of my cash than most people are. Part of what I’m interested in now is how to apply what helping professionals have learned about burnout to other kinds of helping. If you want to support an important cause not just in the fervor of youth but for decades, I think some of the same anti-burnout measures help.
Most people seem to have a problem with #3, egoist or otherwise.
I guess it depends on whether your purpose is to help people, to be seen as helping people (signaling), or to alleviate guilt.
Or fourth option, which I personally espouse, to make your personal living environment as pleasant as possible. Even ineffectual donations can serve this fourth purpose; sometimes supporting values which are important to you is as important as serving those values.
Egoists can purchase warm fuzzies too.
Elaborate?
If you’re doing extensive utilitarian calculations, you are probably purchasing warm fuzzies pretty badly.
au contraire, extensive calculations demonstrating correctitude are part of the reward.
In as much as I do extensive utilitarian calculations it’s to figure out the best ways to purchase utilons. It’s also something I enjoy, but it’s not how I get warm fuzzies.
I approach warm fuzzies as just a specific kind of happiness, and I don’t try to do calculations to estimate it. I just do what I think will make me the most happy. It works pretty well.
Unless I still misunderstand you?
I was having a discussion elsewhere about cryonics. Someone pointed out that it was selfish to subscribe to cryonics because I could instead use that money to provide medical support to third-world countries, pointing to a $50 medical procedure that could save lives.
I spend about $2,000 a year on cryonics for myself. I have a lot more than that in disposable income. I could forgo cryonics to save 40 people per year. I could forgo some amusements to save two or three times that.
I don’t.
In order for that argument to dissuade me from cryonics, I would have to be moved by it sufficiently to give up my disposable income first.
I came to the conclusion that I effectively don’t care about people in distant countries dying of things that won’t impact me.
This realization was a bit disquieting, to say the least.
Unless that someone was spending as much or more on charity or similar extenuating factors, it’s worth noting that
applies even more so to them!
Why? Do you have overpopulation as one of your meta-values? Do you like to think of yourself as a preference rather than hedonic utilitarian? Do you know what your terminal values are?
If you are not feeling genuine satisfaction from donating part of your income, the way JuliaWise does, then there is no point in feeling guilty about not donating. If you do feel guilty, consider digging deeper into your value system to reconcile what you really want and what you think you should want. The metaethics sequence could be of help.
Those aren’t my real objection to spending money on the relevant charities. My real objection is spending money that I could otherwise spend on something I like, or save for future use (retiring sooner). It feels like throwing away money, almost. I have a deficient warm fuzzy receptor and low levels of empathy.
That is what I’m asking. Why do you seem to feel guilty about it (and blame it on some deficiency)?
It conflicts with my previous self-image and what I perceive as...not societal norms, I suppose, but what society thinks its norms should be, perhaps.
I am not feeling guilt. I do not feel unhappy or upset about any of this. It’s just odd and takes a bit of adjusting.
I have an impression of people around me having more empathy and reacting more to warm fuzzies than I do. That’s all I meant by saying I’m deficient in these areas. It was a dispassionate observation.
Ah, OK, makes sense.
The knapsack problem is NP-complete. It’s technically also NP-hard, but NP-complete is more specific.
Edit: Never mind. It’s only NP-complete if you’re just trying to figure out if it’s possible. Optimizing is NP-hard.
Once you actually finish the optimisation problem shouldn’t you then have the best possible solution? That is, either your result fits or it doesn’t. How can finding the best possible solution be less difficult than just finding out whether it is possible to have a solution?
NP-hard is harder than NP-complete. Finding the best possible solution is the harder one.
This makes me wonder why you said “It’s technically also NP-hard, but NP-complete is more specific”. That is a statement that I took to imply that the NP-complete ones were more difficult.
NP-hard means at least as hard as, not strictly harder than, NP-complete. He was likely just demonstrating mastery of the subject :)
First, suppose your problem is NP-hard. This means “at least as hard as one of the NP-complete problems, e.g. 3SAT”, in that if you have a poly-time solution to your problem, it can be used to make a poly-time solution to the NP-complete problem.
Now, becoming more specific: what if a P-time solution to one of the NP-complete problems would yield a P-time solution to your problem? Then your problem joins the class of NP-complete problems, which are all P-time reducible to one another.
NP-hard is “at least as hard as NP-complete”.
A curse on the ambiguity inherent in the English language.
The only time that it is NP-complete is if you’re just trying to figure out if it’s possible.
It’s merely NP-complete if you’re just trying to figure out if it’s possible.
This makes much more sense now that I’ve spotted the intended meaning.