So would you say that there is no reason to care about people unqualifiedly? If you wouldn’t be willing to say this, what reason would you give for unqualified altruism?
I’m not sure exactly what you’re getting at. Unqualified altruism is good for other people and bad for yourself. If you care about other people a lot more than you care about yourself, then you have reasons for unqualified altruism. Cares aren’t something we need reasons for; they’re emotions we have.
I guess I got to thinking about this after reading Lukeprog’s EA post. There he mentioned that EAists care about people regardless of how ‘far away’ (in space, time, political association, etc.) they are. And Singer’s wonderful pond argument likewise involves the premise that if you ought to help a child in immediate danger right in front of you, you ought to help a child in immediate danger in Africa.
I suppose it struck me that, at least for Singer, the move from local altruism (caring about the drowning child) to unqualified altruism (caring about sapient beings wherever and whenever they are) is a premise in an argument. Should I really conclude that this move is not one that I can make on the basis of reasons?
It sounds like you would benefit from (re?)reading the metaethics sequence.
I don’t understand what you mean by a move being made “on the basis of reasons.” Are you familiar with the distinction between instrumental and terminal values? In that language, I would say that the question you seem to be asking is a type error: you’re asking something like “why should unqualified altruism be one of my terminal values?” but the definition of “should” just refers to your terminal values (or maybe your extrapolated terminal values, or whatever).
This is all assuming that you’re highly confident that unqualified altruism is not in fact one of your terminal values. It’s possible that you have some uncertainty about this and that that uncertainty is what you’re really trying to resolve.
I could be asking one of two things, depending on where someone arguing for unqualified altruism (as, say, Singer does) stands. Singer’s argument has the form ‘If you consider yourself obligated to save the local child, then you should consider yourself obligated to save the non-local one.’ He could be arguing that unqualified altruism is in fact my terminal value, given that local altruism is, and that I should realize that the restrictions involved in the qualification ‘local’ are irrational. Or he could be arguing that unqualified altruism is a significant or perhaps necessary instrumental value, given what can be inferred about my terminal values from my commitment to local altruism.
I’m not sure which he thinks, though I would guess that his argument is intended to be the former one. I realize you might not endorse Singer’s argument, but these are two ways to hear my question: ‘Is unqualified altruism a terminal value of mine, given that local altruism is a value of mine?’ and ‘Is unqualified altruism an instrumental value of mine, given what we can know about my terminal values on the assumption that local altruism is also an instrumental value of mine?’
I’m not entirely sure which applies to me. I don’t think I have any terminal values as specific as ‘unqualified/local altruism’, but I may be reflecting badly.