Even if altruism turns out to be a really subtle form of self-interest, what does it matter? An unwoven rainbow still has all its colors.
Rational distress-minimizers would behave differently from rational altruists. (Real people are somewhere in the middle and seem to tend toward greater altruism and less distress-minimization when taught ‘rationality’ by altruists.)
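A toy way to see why the two goal systems come apart (the actions, numbers, and utility functions below are invented purely for illustration, not a claim about anyone’s actual psychology): a distress-minimizer can satisfy its goal by looking away; an altruist can’t.

```python
# Toy model: a pure distress-minimizer and a pure altruist facing the same situation.
# All values are made up for illustration only.

def my_distress(action):
    # Distress I feel after taking the action (lower is better for the distress-minimizer).
    return {"help": 1.0, "look_away": 0.5, "do_nothing": 3.0}[action]

def their_wellbeing(action):
    # The other person's well-being after my action (higher is better for the altruist).
    return {"help": 8.0, "look_away": 2.0, "do_nothing": 2.0}[action]

actions = ["help", "look_away", "do_nothing"]

distress_minimizer_choice = min(actions, key=my_distress)
altruist_choice = max(actions, key=their_wellbeing)

print(distress_minimizer_choice)  # "look_away": the cheapest way to stop feeling bad
print(altruist_choice)            # "help": the other person's state is what counts
```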
That could be because rationality decreases the effectiveness of distress minimisation techniques other than altruism.
…because it makes you try to see reality as it is?
In me, it’s also had the effect of reducing empathy. (Helps me not go crazy.)
Well, for me, believing myself to be a type of person I don’t like causes me great cognitive dissonance. The more I know about the ways I might be fooling myself, the more I have to actually adjust my behavior in order to honestly hold the belief that I’m the kind of person I want to be.
For instance, it used to be enough for me that I treat my in-group well. But once I understood that that was what I was doing, I wasn’t satisfied with it. I now follow a utilitarian ethics that’s much more materially expensive.
Are they being taught ‘rationality’ by altruists or ‘altruism’ by rationalists? Or ‘rational altruism’ by rational altruists?
Shouldn’t the methods of rationality be orthogonal to the goal you are trying to achieve?
Perhaps this training simply focuses attention on the distress to be alleviated by altruism. Learning that your efforts at altruism aren’t very effective might be pretty distressing.
That seems to verge on the trivializing gambit, though.
I guess I don’t see the problem with the trivializing gambit. If it explains altruism without needing to invent a new kind of motivation, why not use it?
Why would actual altruism be a “new kind” of motivation? What makes it a “newer kind” than self-interest?
I meant that everyone I’ve discussed the subject with believes that self-interest exists as a motivating force, so maybe “additional” would have been a better descriptor than “new.”
Hrm… But “self-interest” is itself a fairly broad category, including many subcategories like emotional state, survival, fulfillment of curiosity, self-determination, etc. Given the evolutionary pressures there have been toward cooperation and such, it seems like it wouldn’t be that hard a step for this to be implemented via actually caring about the other person’s well-being, instead of it secretly being just a concern for your own. It’d perhaps be simpler to implement that way. It might be partly implemented by the same emotional reinforcement system, but that’s not the same thing as saying that the only thing you care about is your own reinforcement system.
Well, the trivializing gambit here would be to say that “caring about another person” just means that your empathy circuitry causes you to feel pain when you observe someone in an unfortunate situation, so your desire to help is ultimately triggered by the desire to remove this source of distress.
I’m not sure how concern for another’s well-being would actually be implemented in a system that only has a mechanism for caring solely about its own well-being (i.e. how the mechanism would evolve). The push for cooperation probably came about more because we developed the ability to model the internal states of critters like ourselves, so that we could mount a better offense or defense. The simplest mechanism would just be to let a facial expression or posture cause us to feel a toned-down version of what we would normally feel when we had the same expression or posture (you’re looking for information, not to literally feel the same thing at the same intensity; when the biggest member of your pack is aggressing at you, you probably want the desire to run away or submit to override the empathetic aggression).
It’s worth noting (for me) that this doesn’t diminish the importance of empathy, and it doesn’t mean that I don’t really care about others. I think that caring for others is ultimately rooted in self-centeredness, but, like depth perception, it is probably a pre-installed circuit in our brains (a Type 1 system) that we can’t really remove totally without radically modifying the hardware. Caring about another person is as much a part of me as being able to recognize their face. The specific mechanism is only important when you’re trying to do something specific with your caring circuits (or trying to figure out how to emulate them).
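A rough sketch of the toned-down-mirroring-with-override mechanism described above (the emotion labels, gain factor, and override rule are all assumptions, just to make the idea concrete):

```python
# Toy sketch of empathy as toned-down mirroring, with a self-preservation override.
# The specific emotions, scaling factor, and override rule are invented for illustration.

MIRROR_GAIN = 0.3  # you feel a weaker copy of what the observed expression signals

def empathic_response(observed_emotion, observed_intensity, directed_at_me=False):
    if observed_emotion == "aggression" and directed_at_me:
        # The information still gets through, but fear/submission overrides mirrored aggression.
        return ("fear", observed_intensity)
    return (observed_emotion, observed_intensity * MIRROR_GAIN)

print(empathic_response("sadness", 10))           # ('sadness', 3.0): toned-down copy
print(empathic_response("aggression", 10, True))  # ('fear', 10): override, don't mirror
```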
It may not matter pragmatically but it still matters scientifically. Just as you want to have a correct explanation of rainbows, regardless of whether this explanation has any effects on our aesthetic appreciation of them, so too you want to have a factually accurate account of apparently altruistic behavior, independently of whether this matters from a moral perspective.
Science is about predicting things, not about explaining them. If a theory has no additional predictive value, then it’s not scientifically valuable.
In this case I don’t see the added predictive value.