Around here, we seem to have a tacit theory of ethics. If you make a statement consistent with it, you will not be questioned.
The theory is this: though we tend to think that we’re selfless beings, we’re actually not; the sole reason we act selfless at all is to make other people think we really are selfless; and the reason we think we’re selfless is that thinking so makes it easier to convince others that we are.
The thing is, I haven’t seen much justification of this theory. I might have seen some here, some there, but I don’t recall any one big attempt at justifying this theory once and for all. Where is that justification?
I agree with khafra. If “selfish” means “pursuing things if and only if they accord with one’s own values”, then most people here would say that every value-pursuing agent is selfish by definition.
But, for that very reason (among other things), that definition is not a useful one. A useful definition of “selfish” is closer to “valuing oneself above all other things.” And this is not universally agreed to be good around here.
I might value myself a great deal, but it’s highly unlikely that I would, upon reflection, value myself over all other things. If I had to choose between destroying either myself or the entire rest of the universe (beyond the bare minimum that I need to stay alive), I would obliterate myself in an instant. I expect that most people here would make the same choice in the same situation.
I think the general view is more nuanced. If there is a LW theory of selflessness/selfishness, Robin Hanson would be able to articulate it far better than I; but here’s my shot:
“Selflessness” is an incoherent concept. When you think of being selfless, you think of actions that make other people better off as judged by your own value system. Your value system may hold that fulfilling other people’s value systems makes them better off, or it may hold that changing others’ value systems to “believing in Jesus is good” makes them better off.
The latter is actually more coherent than the former, because if one of those other value systems assigns very high utility to “everyone else dies,” you cannot make everyone better off.
Many LW members place a high value on altruism, but they don’t call themselves selfless; they understand that they’re fulfilling a value system which places a high utility on, for lack of a better word, universal eudaimonia.
the sole reason we act selfless at all is to make other people think we really are selfless
That doesn’t describe me. I sometimes act in ways that are detrimental to me and beneficial to others, out of a broader conception of my own self-interest: I figure that those actions are beneficial to my own projects, properly conceived.
I most specifically don’t want people to think I am exploitable (which is one interpretation of “selfless”). I do want people to think of me as someone with whom it is desirable to cooperate.
I don’t think that’s the tacit theory of ethics around here.
Genes may be selfish, but primates who had other related primates looking out for them, or who showed that they were caring, survived better. It could well be that some simple mutations led to primates that showed they were caring because they actually were caring. (Edit: It seems to me that this must be the case for at least part of our value system.)
but the benefits to the genes can just as easily come from more subtle situational differences, and assistance by related others, rather than a major status change and change in attitudes.
One would be hard-pressed to find a more perfect example of doublethink than the popular notion of selflessness.
Selflessness is supposed to be praiseworthy, but if we try to clarify the meaning of “selfless person” we get either
(1) a person whose greatest (or only) satisfaction comes from helping others, or
(2) a person who derives no pleasure at all from helping others (not even anticipated indirect future pleasure), but does it anyway.
Neither of these is generally considered praiseworthy: (1) is clearly someone acting for purely selfish reasons, and (2) is just a robotic servant. Yet somehow a sort of “quantum superposition” of these two is held to be both possible and praiseworthy*.
*The common usage of “selfish” is an analogous kind of doublethink/newspeak
ETA: I, and probably many others, consider (1) praiseworthy, but if that’s the definition of selfless then the standard LW argument you mentioned applies to it.
I don’t think that people think they are selfless. They usually think they’re more selfless than they actually are, though.
The theory is that though we tend to think that we’re selfless beings, we’re actually not, and the sole reason we act selfless at all is to make other people think we really are selfless, and the reason we think we’re selfless is because thinking we’re selfless makes it easier to convince others that we’re selfless.
I suspect most people at Less Wrong have a more complex view than this description. People behave selflessly for reasons of inclusive fitness and reciprocal altruism; they also engage in “selfless” behavior for the same reason a “forgiving” tit-for-tat strategy wins in iterated prisoner’s dilemmas.
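The “forgiving” tit-for-tat strategy mentioned above can be sketched in a few lines. This is a toy illustration under stated assumptions, not anyone’s canonical implementation; the payoff values are the conventional Axelrod-tournament ones (T=5, R=3, P=1, S=0), and the forgiveness probability is an arbitrary example parameter.

```python
import random

# Conventional prisoner's-dilemma payoffs: (my_move, their_move) -> my score.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def forgiving_tit_for_tat(opponent_history, forgiveness=0.1):
    """Cooperate on the first round; afterwards mirror the opponent's last
    move, except that a defection is forgiven with probability `forgiveness`."""
    if not opponent_history:
        return "C"
    if opponent_history[-1] == "D" and random.random() < forgiveness:
        return "C"  # let the defection slide this time
    return opponent_history[-1]

def play(strategy_a, strategy_b, rounds=200):
    """Run an iterated game; each strategy sees only the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

The point of the forgiveness term: with a little noise, plain tit-for-tat can lock two well-meaning players into endless mutual retaliation after one accidental defection; occasional forgiveness is what lets the pair find their way back to cooperation.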
This seems obviously true, except that there are certain regimes where genuine cooperation isn’t ruled out by selfish genes (typically requiring a sort of altruistic willingness to undertake costly detection and punishment of cheaters). So I would not at all rule out instances of genuine altruism if a case can be made that it’s positive-sum enough and widespread enough.
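The “costly detection and punishment of cheaters” point can be made concrete with a single round of a toy public-goods game. The function name and all parameter values here are hypothetical, chosen only so that punishment flips the free-rider’s incentive:

```python
def public_goods_payoffs(contributions, multiplier=2.0,
                         punishers=(), fine=1.5, cost=0.25):
    """One round of a public-goods game with optional costly punishment.
    Contributions go into a pot, the pot is multiplied and split equally,
    and each listed punisher pays `cost` per defector to levy `fine` on
    every non-contributor. All parameter values are illustrative."""
    n = len(contributions)
    share = multiplier * sum(contributions) / n
    payoffs = [share - c for c in contributions]  # you keep what you withheld
    defectors = [i for i, c in enumerate(contributions) if c == 0]
    for p in punishers:
        for d in defectors:
            payoffs[d] -= fine   # the defector is fined...
            payoffs[p] -= cost   # ...but punishing is not free
    return payoffs

# Without punishment, free-riding pays (1.5 vs 0.5 for the contributors);
# with one punisher it doesn't, and the punisher ends up worse off than
# the other cooperators -- the punishment itself is an altruistic act.
print(public_goods_payoffs([1, 1, 1, 0]))                  # [0.5, 0.5, 0.5, 1.5]
print(public_goods_payoffs([1, 1, 1, 0], punishers=(0,)))  # [0.25, 0.5, 0.5, 0.0]
```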
That’s news to me.
This is relevant:
http://lesswrong.com/lw/uu/why_does_power_corrupt/
ISTM that any other theory would be the one that requires justification. How do your genes selfishly reproduce if you’re genuinely selfless?