I’m reading Ayn Rand’s “The Virtue of Selfishness” and it seems to me that (a part of) what she tried to say was approximately this:
Some ethical systems posit a false dichotomy between “doing what one wants” and “helping other people”. And then they derive an ‘ethical’ conclusion that “doing what one wants” is evil, and “helping other people” is good, by definition. Which is nonsense. Also, humans can’t psychologically abstain completely from the “doing what they want” part (even after removing “helping other people” from it), but instead of realising the nonsense of such ethics, they feel guilty, which makes them easier to control.
I don’t read philosophy, so I can’t tell if someone has said it exactly like this, but it seems to me that this is not a strawman. At least it seems to me that I have heard such ideas floating around, although not expressed this clearly. (Maybe it’s not exactly what the original philosopher said; maybe it’s just a popular simplification.) There is the unspoken assumption that when people “do what they want”, that does not include caring about others; that people must be forced into pro-social behavior… and the person who says this usually suggests that some group they identify with should be given power over the evil humans to force them into doing good.
And somehow people never realize the paradox of where the “wanting to do what seemingly no one wants to do” comes from. I mean, if no one really cared about X, then no one would be concerned that no one cares about X, right? If nobody cares about sorting pebbles, then nobody feels that we should create some mechanisms to force people to sort pebbles because otherwise, oh the horrors, the pebbles wouldn’t be sorted properly. So what; the pebbles won’t be sorted, no one cares. But we do care about people in need not getting help. So that desire obviously comes from us. Therefore acting on that desire does not contradict “doing what we want”, because it is a part of what we want.
So… now these are my thoughts, not Rand’s… one possible interpretation is that the people who created these systems of ethics actually were psychopaths. They really didn’t feel any desire to help other people. But they probably understood that other people would reward them for creating ideas about how to help others. Probably because they understood on an intellectual level that without some degree of cooperation, society would fall apart, which would be against their interest. So they approached it like a game theory problem: no one really cares about other people, but blah blah blah iterated prisoner’s dilemma or something, therefore people should act contrary to their instincts and actually help each other. And because these psychopaths were charming people, others believed their theories expressed the highest wisdom and benevolence, and felt guilty for not seeing things so clearly. (Imagine a less intelligent Professor Quirrell suffering from typical mind fallacy, designing rules for a society composed of his clones.)
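To make the “blah blah blah iterated prisoner’s dilemma” step concrete, here is a minimal sketch in Python. The payoff numbers and the tit-for-tat strategy are my own standard textbook choices, not anything Rand or those hypothetical ethicists wrote; the point is only that two purely self-interested agents who expect to meet again do better by reciprocating cooperation than by always defecting, so “help each other” can be derived without any felt concern for others.

```python
# A minimal, illustrative iterated prisoner's dilemma (assumed payoffs, not from any source above).

PAYOFFS = {  # (my move, their move) -> my payoff; "C" = cooperate, "D" = defect
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect: I get exploited
    ("D", "C"): 5,  # I defect, they cooperate: I exploit them
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """The 'no one really cares about others' strategy: defect every round."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Return the total payoffs of two strategies playing an iterated game."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# Two "selfish" reciprocators outscore two unconditional defectors:
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
print(play(tit_for_tat, always_defect))    # (99, 104)
```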
I’ve been reading Pinker’s “Better Angels of Our Nature”, and it seems to me that people don’t need to be psychopaths to have difficulty feeling empathy and concern for other people. If you’ve read HPMOR, the villagers who used to enjoy cat-burning are a good example, one which Pinker also uses. He suggests that our feelings of empathy have increased over time, although he’s not sure for what reason. So in earlier times a few people, in their better moments, might have claimed that caring about others was important, while people in general were more selfish, so the stated ideal and the actual feelings were out of sync.
I mean, even today, when you say you care about other people, you don’t suddenly donate all of the money that isn’t keeping you alive to effective charities, because you don’t actually feel empathy for every single other person on this earth. You don’t have to be a psychopath for that to happen.
This reminds me of this part from “The Failures of Eld Science”:

“A woman of wisdom,” Brennan said, “once told me that it is wisest to regard our past selves as fools beyond redemption—to see the people we once were as idiots entire. I do not necessarily say this myself; but it is what she said to me, and there is more than a grain of truth in it. As long as we are making excuses for the past, trying to make it look better, respecting it, we cannot make a clean break. It occurs to me that the rule may be no different for human civilizations. So I tried looking back and considering the Eld scientists as simple fools.”
“Which they were not,” Jeffreyssai said.
Maybe, analogously, it would be wise to regard former civilizations as psychopaths, although they were not. This includes religions, moral philosophies, etc. The idea is that those people didn’t know what we know now… and probably also didn’t feel what we feel now.
EDIT: To be more precise, they were capable of having the same emotions; they just connected them with different things. They had the same chemical foundation for emotions, but connected them with different states of mind. For example, they experienced fun, but instead of computer games they connected it with burning cats; etc.
(Of course there are differences in knowledge and feelings among different people now and in the past, etc. But there are some general trends, so if we speak about sufficiently educated or moral people, they may have no counterparts in the past, or at least not many of them.)
Some ethical systems posit a false dichotomy between “doing what one wants” and “helping other people”. And then they derive an ‘ethical’ conclusion that “doing what one wants” is evil, and “helping other people” is good, by definition.
Funny, this is a decent summary of an idea I’ve had kicking around for a while, though framed differently. A more or less independent one, I think; I’ve read Rand, but not for about a decade and a half.
I’d also add that “helping people” in this pop-culture mentality is typically framed in a virtue-ethical rather than a consequentialist way; one is recognized as a good person by pattern-matching to preconceived notions of how a good person should behave, not by the expected results of one’s actions. Since those preconceptions are based on well-known responses to well-known problems, a pop-culture altruist can’t be too innovative or solve problems at too abstract a level; everyone remembers the guy who gave half his cloak to the beggar over the guy who pioneered a new weaving technique or produced an unusually large flax crop. Nor can one target too unfashionable a cause.
Innovators might eventually be seen as heroes, but only weakly and in retrospect. In the moment, they’re more likely to be seen neutrally or even as villains (e.g. for crowding out less efficient flax merchants, or simply for the sin of greed). Though this only seems to apply in certain domains; pure scientists, for example, are usually admired, even if their research isn’t directly socially useful. Same for artists.
one is recognized as a good person by pattern-matching to preconceived notions of how a good person should behave, not by the expected results of one’s actions
Yes, even when the “generally seen as good” actions are predictably failing or even making things worse, you are supposed to do them. Because that’s what good people do! And you should signal goodness, as opposed to… uhm, actually making things better, or something.

IOW “Typical Mind and Disbelief In Straight People”, but s/straight/good/?

Exactly.
This pattern of “taking something unbelievable other people said, and imagining what it would mean if, taken literally, it made complete sense from their point of view, even if that creates an impolite ad-hominem argument against them” probably has the potential to generate many surprising hypotheses.
It probably needs some nice short name, to remind people to use it more often.
I don’t read philosophy, so I can’t tell if someone has said it exactly like this, but it seems to me that this is not a strawman. At least it seems to me that I have heard such ideas floating around, although not expressed this clearly. (Maybe it’s not exactly what the original philosopher said; maybe it’s just a popular simplification.) There is the unspoken assumption that when people “do what they want”, that does not include caring about others; that people must be forced into pro-social behavior… and the person who says this usually suggests that some group they identify with should be given power over the evil humans to force them into doing good.
I do read philosophy, and this does seem like a strawman to me. I’m not aware of a single serious moral philosopher who believes there is a sharp dichotomy between “doing what you want” and “helping others”.
The only philosopher who comes close, I think, is Kant, who thought that the reasons for performing an action are morally relevant, above and beyond the action and its consequences. So, according to Kant, it is morally superior to perform an act because it is the right thing to do rather than because it is an act I want to perform for some other reason. Given this view, the ideal test case for moral character is whether a person is willing to perform an act that goes against her non-moral interests simply because it is the right thing to do. But this still differs from the claim that altruistic behavior is opposed to self-interested behavior.
I also read some philosophy, and while the dichotomy between doing what you want and helping others isn’t often stated explicitly, it’s common to assume that someone who is doing what they want is not benevolent and is likely to screw people over. It’s mainly the virtue ethicists who think that egoists would be benevolent.
And somehow people never realize the paradox of where the “wanting to do what seemingly no one wants to do” comes from. I mean, if no one really cared about X, then no one would be concerned that no one cares about X, right? If nobody cares about sorting pebbles, then nobody feels that we should create some mechanisms to force people to sort pebbles because otherwise, oh the horrors, the pebbles wouldn’t be sorted properly.
Well, no. For example, I care very much about these pebbles right here (these represent my friends), and recognize that there are many other people who don’t care about these pebbles and instead care about totally different pebbles I don’t care either way about. And some other people I know care about some of my pebbles, but not the rest, and I care about some of theirs but not the rest.
It occurs to me that if there were a broad set of principles everyone agreed to which said that, ethically, all pebbles ought to be sorted, then everyone would care some about my pebbles, at the comparatively low cost for me of caring a little about other people’s pebbles.
Of course, from there it’s a short step to people who conclude that, ethically, it is best to disregard your own particular attachment to your personal pebbles and be an effective pebblist, taking whatever actions most effectively sort pebbles anywhere even if that means your own pebbles are less sorted than they could be if you devoted more time to them. And some people take that too far and provoke Rayn And Pebblist to promote focusing on your own pebbles to the exclusion of all else.
And somehow people never realize the paradox of where the “wanting to do what seemingly no one wants to do” comes from. I mean, if no one really cared about X, then no one would be concerned that no one cares about X, right?
The way out of this paradox is that no one wants to promote X themselves, but they want other people to do it.