[T]here can be no way of justifying the substantive assumption that all forms of altruism, solidarity and sacrifice really are ultra-subtle forms of self-interest, except by the trivializing gambit of arguing that people have concern for others because they want to avoid being distressed by their distress. And even this gambit […] is open to the objection that rational distress-minimizers could often use more efficient means than helping others.
Jon Elster
Even if altruism turns out to be a really subtle form of self-interest, what does it matter? An unwoven rainbow still has all its colors.
Rational distress-minimizers would behave differently from rational altruists. (Real people are somewhere in the middle and seem to tend toward greater altruism and less distress-minimization when taught ‘rationality’ by altruists.)
That could be because rationality decreases the effectiveness of distress minimisation techniques other than altruism.
..because it makes you try to see reality as it is?
In me, it’s also had the effect of reducing empathy. (Helps me not go crazy.)
Well, for me, believing myself to be a type of person I don’t like causes me great cognitive dissonance. The more I know about how I might be fooling myself, the more I have to actually adjust to achieve that belief.
For instance, it used to be enough for me that I treat my in-group well. But once I understood that that was what I was doing, I wasn’t satisfied with it. I now follow a utilitarian ethics that’s much more materially expensive.
Are they being taught ‘rationality’ by altruists or ‘altruism’ by rationalists? Or ‘rational altruism’ by rational altruists?
Shouldn’t the methods of rationality be orthogonal to the goal you are trying to achieve?
Perhaps this training simply focuses attention on the distress to be alleviated by altruism. Learning that your efforts at altruism aren’t very effective might be pretty distressing.
That seems to verge on the trivializing gambit, though.
I guess I don’t see the problem with the trivializing gambit. If it explains altruism without needing to invent a new kind of motivation why not use it?
Why would actual altruism be a “new kind” of motivation? What makes it a “newer kind” than self interest?
I meant that everyone I’ve discussed the subject with believes that self-interest exists as a motivating force, so maybe “additional” would have been a better descriptor than “new.”
Hrm… But “self-interest” is itself a fairly broad category, including many subcategories like emotional state, survival, fulfillment of curiosity, self-determination, etc. Given the evolutionary pressures there have been toward cooperation, it seems like it wouldn’t be that hard a step for this to be implemented via actually caring about the other person’s well-being, instead of it secretly being just a concern for your own. It’d perhaps be simpler to implement that way. It might be partly implemented by the same emotional reinforcement system, but that’s not the same thing as saying that the only thing you care about is your own reinforcement system.
Well, the trivializing gambit here would be to say that “caring about another person” just means that your empathy circuitry causes you to feel pain when you observe someone in an unfortunate situation, and so your desire to help is ultimately triggered by the desire to remove this source of distress.
I’m not sure how concern for another’s well-being would actually be implemented in a system that only has a mechanism for caring solely about its own well-being (i.e. how the mechanism would evolve). The push for cooperation probably came about more because we developed the ability to model the internal states of critters like ourselves, so that we could mount a better offense or defense. The simplest mechanism would be to let a facial expression or posture cause us to feel a toned-down version of what we would normally feel when we had the same expression or posture (you’re looking for information, not to literally feel the same thing at the same intensity; when the biggest member of your pack is aggressing at you, you probably want the desire to run away or submit to override the empathetic aggression).
It’s worth noting (for me) that this doesn’t diminish the importance of empathy and it doesn’t mean that I don’t really care about others. I think that caring for others is ultimately rooted in self-centeredness but like depth perception is probably a pre-installed circuit in our brains (a type I system) that we can’t really remove totally without radically modifying the hardware. Caring about another person is as much a part of me as being able to recognize their face. The specific mechanism is only important when you’re trying to do something specific with your caring circuits (or trying to figure out how to emulate them).
It may not matter pragmatically but it still matters scientifically. Just as you want to have a correct explanation of rainbows, regardless of whether this explanation has any effects on our aesthetic appreciation of them, so too you want to have a factually accurate account of apparently altruistic behavior, independently of whether this matters from a moral perspective.
Science is about predicting things, not about explaining them. If a theory has no additional predictive value, then it’s not scientifically valuable.
In this case I don’t see the added predictive value.
There’s the alternative “gambit” of describing it in terms of signaling. There’s the alternative “gambit” of describing it in terms of wanting to live in the best possible universe. There’s the alternative “gambit” of ascribing altruism to the emotional response it invokes in the altruistic individual.
I find the quote false on its face, in addition to being an appeal to distaste.
Careful, there are some tricky conceptual waters here. Strictly, anything I want to do can be ascribed to my emotional response to it, because that’s how nature made us pursue goals. “They did it because of the emotional response it invoked” is roughly analogous to “They did it because their brain made them do it.”
The cynical claim would be that if people could get the emotional high without the altruistic act (say, by taking a pill that made them think they did it), they’d just do that. I don’t think most altruists would, though. There are cynical explanations for that fact, too (“signalling to yourself leads to better signalling to others”) but they begin to lose their air of streetwise wisdom and sound like epicycles.
Are you suggesting emotions are necessary to goal-oriented behavior?
There should be some evidence for that claim; we have people with diminished emotional capacity in a wide range of forms. Do individuals with alexithymia demonstrate impaired goal-oriented behavior?
I think there’s more to emotion as a motive system than the brain as a motive force. People can certainly choose to stop taking certain drugs which induce emotional highs. 10% of people who start taking heroin are able to keep their consumption levels “moderate” or lower, as compared to 90% for something like tobacco, according to one random and hardly authoritative internet site—the precise numbers aren’t terribly important. Perhaps such altruists, like most people, deliberately avoid drugs like heroin for this reason?