Though I’ve read everything you’ve said, I don’t have a clear intuitive sense for where you’re coming from and why this topic is important to you.
I don’t want to see the human species stop doing things like what Make-A-Wish does. I feel that the kind of urge that motivates people to do such things is a large part of why humanity is worth protecting in the first place. Although I agree that saving lives is typically more important than any particular other cause, and that it’s usually what you should do if you have to choose, I think we should, if at all possible, avoid compromising high-level values, such as by discouraging other forms of altruism, in order to do so.
To put this in a broader context, I have a strong aversion to the mentality expressed in the second paragraph of this post. I fear that if we don’t allocate some of our caring to particular humans in their individual capacities, people will come to be seen as dispensable—and then, one day, I might be discarded, too. Since I greatly value my existence on good days and my autonomy even on the worst days, this is a nightmare scenario. I’m afraid of someone being tortured for 50 years to save 3^^^3 people the inconvenience of a dust speck. Yes, it may be the better option, if those are the only two choices, but that doesn’t make it good.
Given that this is how I feel even when we’re talking about existential risk—saving the whole human species and its future—I hope you can understand why similar-sounding arguments against small-scale fuzzy personal altruism in favor of anything less than existential risk reduction leave an especially bad taste in my mouth.
I’m the type of person who highly values fuzzies, to such an extent that I value others’ valuing of fuzzies, and I don’t want to push the culture in a direction toward hostility to valuing fuzzies. I think it’s great if we can learn to be more rational in the pursuit of our goals, but anyone whose goals include trips to Disneyland for cancer patients doesn’t have anything more to be ashamed of than someone whose goals include a new pair of shoes.
I don’t want to push the culture in a direction toward hostility to valuing fuzzies. I think it’s great if we can learn to be more rational in the pursuit of our goals, but anyone whose goals include trips to Disneyland for cancer patients doesn’t have anything more to be ashamed of than someone whose goals include a new pair of shoes.
Several points here:
I agree with Holden’s post Nothing wrong with selfish giving—just don’t call it philanthropy (though I find the negative connotation of ‘selfish’ in the phrase ‘selfish giving’ unfortunate). I think that people who are interested in making the world a better place should allocate some of their resources with an eye toward maximizing their positive impact.
As I’ve said elsewhere, I think that there’s a fair amount to the points that Yvain makes in his Doing Your Good Deed For the Day, and I do think that it sometimes happens that people’s willingness to help others is diminished by their existing charitable activities.
I think that people who are interested in making the world a better place should allocate some of their resources with an eye toward maximizing their positive impact.
Agreed, of course.
As I’ve said elsewhere, I think that there’s a fair amount to the points that Yvain makes in his Doing Your Good Deed For the Day, and I do think that it sometimes happens that people’s willingness to help others is diminished by their existing charitable activities.
Yes, I regard this as definitely a bug and not a feature.
I’m all for people feeling more fuzzies.
Glad to hear it. :-)
I’ll take some time to reflect on the nature and extent of our apparent disagreement.
You are making the perfect (people donating to x-risks charities instead of buying personal luxuries) the enemy of the good (people donating to save lives instead of donating to provide trips to Disneyland).
If you know how to convince people (not LW regulars) to contribute to x-risk reduction instead of buying shoes, then please do so. If not, it doesn’t make sense to complain about efforts that can convince people to make immediate positive changes in their behavior while planting the seeds of convincing them to maximize expected utility more generally.
You are making the perfect (people donating to x-risks charities instead of buying personal luxuries) the enemy of the good (people donating to save lives instead of donating to provide trips to Disneyland).
My preference ordering is:
1. (people donating to x-risks charities instead of buying personal luxuries)
2. (people donating to save lives instead of buying personal luxuries)
3. (people donating to provide trips to Disneyland instead of buying personal luxuries)
4. (people donating to x-risks charities instead of donating to provide trips to Disneyland)
5. (people donating to save lives instead of donating to provide trips to Disneyland)
EDIT: No, this is wrong; see below. Attention should be focused on the grandparent.
(people donating to provide trips to Disneyland instead of buying personal luxuries)
This has very little marginal utility: it effectively trades your luxury for the fuzzy feeling of providing a luxury to someone else.
(people donating to x-risks charities instead of donating to provide trips to Disneyland)
This has more utility. In fact, it bears a strong resemblance to
(people donating to x-risks charities instead of buying personal luxuries)
given that “providing trips to Disneyland” looks more like a luxury than charity.
Labeling the items in your ordering A > B > C > A* > B*, I don’t understand how you can prefer A > C but C > A*, unless you think that “preventing the purchase of personal luxuries” is worth more utility than preventing existential risk (A, A*) or saving lives (B, B*).
You’re right. The penultimate item is too low; it should in fact be second.
All I really wanted to point out was the abundance of items between the first and the last, and the fact that (people donating to save lives instead of buying personal luxuries) is higher than (people donating to save lives instead of donating to provide trips to Disneyland).
Upvoted, thanks for clarifying. I agree with much of what you say here.
Yes, never mind—see my reply to JGWeissman.
Your ordering raises the possibility that your preferences are nontransitive! :-)
I don’t see the nontransitivity, but it does seem to imply (people donating to provide trips to Disneyland instead of buying personal luxuries) > (people donating to x-risks charities instead of donating to provide trips to Disneyland), which, while not inconsistent, seems to undervalue x-risk reduction relative to trips to Disneyland for cancer patients.
Where does the status quo fit into your preference ordering?