I have a strong, visceral negative reaction to this.
I’ll point out that it seems contradictory, for one thing. “Humanity” is made up of humans; concern for humanity should be approximately the result of adding together one’s concern for individual humans. If utility is multiplicative, then it’s also divisible—in which case an increased concern for humanity cannot be accompanied by a decreased concern for individuals without a significant increase in the population size (beyond what has happened in our lifetimes).
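A minimal sketch of the arithmetic being appealed to here, using the thread’s H and h and introducing N for the population size (roughly 6 billion in this discussion); N is my own notation, not a symbol used in the thread:

\[
H \;\approx\; \sum_{i=1}^{N} h_i \;\approx\; N\,h
\qquad\Longrightarrow\qquad
h \;\approx\; \frac{H}{N},
\]

so if N is roughly fixed, H and h can only rise or fall together; an increase in H alongside a decrease in h would require N itself to have grown substantially.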
Of course, I’m not sure if the above is the true reason for my negative reaction. But it’s darn well worth considering all the same.
Put it this way: I had concern level H for humanity, and h for a given individual. However, H was very far from being 6 billion times h. Now, this is closer to being the case; for this to happen, H has gone up while h has gone down.
This still bothers me; I feel like you should have just increased H without decreasing h.
Why would you say that when you have no idea what his H or his h were in the first place?
It’s intuitively difficult for us to accept, or at least to say, that having too much concern for a person is as possible as having too little.
Well, I don’t have “no idea”—I have a probability distribution informed by experience.
Having too much concern for an individual is theoretically possible, I suppose, but it’s not a problem anyone is terribly likely to suffer from. The reason most people don’t care about most other people is not the fact that the human population is large; it’s the fact that most of that large population isn’t psychologically close enough for them to care.
It’s possible that utilitarian calculations could argue for downgrading one’s level of concern for e.g. Amanda Knox—but I’m far more inclined to suspect rationalization of pre-existing natural indifference on the part of someone who makes a claim like that.
Actually, h has increased on average; it’s just that h has decreased for the immediately available examples. I.e., I care much less about Amanda Fox or any other single salient example, but more about general, systematic effects that might cause great harm to people I don’t hear about.
I assume you mean Amanda Knox.
Also, do you really care less about (i.e. assign less utility to the welfare of) someone like Amanda than previously, or is it just that you try to avoid strong emotional reactions to such individual cases?
Let’s look at it this way: if I had cash to hand and were given the option “pay X to solve this particular salient injustice,” then I’d be less inclined to do it than before.
On the other hand, if I were given the option “pay X to solve this particular class of injustices,” then I’d be more inclined to do it than before.
Emotional involvement follows a similar trend.