A good portion of LessWrong is unreadable for me because it rests on some kind of altruistic axiom. Personally, I care about myself, my immediate family, and a few friends. I will feel a pang of suffering when I see people suffering, but I do not feel that pang when I hear about people I don't know suffering, so I conclude that I don't care about other people beyond some abstract measure of proximity and their economic utility to me.
So if there were a button you could press that would make one of your close friends happier but would kill someone you haven't met, you would be totally okay pressing it?
I wouldn't, but that's more for superrationality reasons (if I could sign a contract with everyone else in the world committing never to press such a button, I totally would sign it) than because I don't care about my friend that much more than about the stranger.
Oh, so many variations on this experiment to test the intuition behind my position.
Your version? It depends on how much happier this friend gets. If it's the equivalent of a cup of coffee, I'd just buy them one and live on knowing that I am not a murderer. If the friend gets eternal bliss, I wouldn't do it either, since I'd get jealous and would have to live with that and with the fact that I am a murderer.
I'd be willing to press the button for personal gain, though. Not for a cup of coffee; my threshold would be higher than that.
What I would be willing to do, though, is press a button that prevents a person from being born, as long as that person is not a potential heir of mine or of my friends.
I care about (read: have a vested interest in) people who can influence my wellbeing and choices. Because all human beings have the potential to do this, I care about them to some degree, great or small. Because I cannot physically empathize with seven billion humans at once at an equal or appropriate level, I use a general altruistic axiom to determine how to act toward people I do not have the resources to physically care about.
That's my reason, at least, for having an altruistic axiom, explained in a terribly simple manner. I'm sure there are other, better explanations for working from altruistic axioms. I'm not making a case for the axiom, just explaining what I see as my reasons for having it.
This is turning into a tautology. I care about people to the degree that they are useful to me. My friends and family are incredibly useful because of the great state of mind they put me in. A person living in extreme poverty whom I have never met, not so much. They could be useful if they were highly educated and had access to sufficient capital to leverage their knowledge in a way complementary to my skills, but the initial investment far exceeds the potential gain.
What irks me is not the statement above but the tradeoff made in utilitarianism: that the pain of other people should count as much as my own pain. It simply does not.
If everyone (or at least most people) thinks like you, then seeing people suffer makes them suffer as well. And that makes their friends suffer, and so on. So, by transitivity, you should expect to suffer at least a little when people you don't know directly are suffering.
But I don't think it is about the feeling. I also don't really feel anything when I hear about some number of people dying in a faraway place. Still, I believe that the world would be a better place if people were not dying there. If I am in a position to help, I believe the result is better in the long run if I just shut up and multiply and help many faraway people, rather than caring mostly about a few friends and neighbors.
If we'd all just cooperate, maybe this would be a better world. But we don't, and it is not.
I have yet to see a calculation showing that a gift to some faraway people, instead of a fine dinner with my friends, will give me a return on my money in the long run. Assume that all people do this, to head off free-rider arguments.