Perhaps this training simply focuses attention on the distress to be alleviated by altruism. Learning that your efforts at altruism aren’t very effective might be pretty distressing.
That seems to verge on the trivializing gambit, though.
I guess I don’t see the problem with the trivializing gambit. If it explains altruism without needing to invent a new kind of motivation, why not use it?
Why would actual altruism be a “new kind” of motivation? What makes it a “newer kind” than self interest?
I meant that everyone I’ve discussed the subject with believes that self-interest exists as a motivating force, so maybe “additional” would have been a better descriptor than “new.”
Hrm… But “self-interest” is itself a fairly broad category, including many subcategories like emotional state, survival, fulfillment of curiosity, self-determination, etc. Given the evolutionary pressures there have been toward cooperation and the like, it seems like it wouldn’t be that hard a step for it to be implemented via actually caring about the other person’s well-being, instead of it secretly being just a concern for your own. It’d perhaps be simpler to implement that way. It might be partly implemented by the same emotional reinforcement system, but that’s not the same thing as saying that the only thing you care about is your own reinforcement system.
Well, the trivializing gambit here would be to say that “caring about another person” just means that your empathy circuitry causes you to feel pain when you observe someone in an unfortunate situation, so your desire to help is ultimately triggered by the desire to remove this source of distress.
I’m not sure how concern for another’s well-being would actually be implemented in a system that only has a mechanism for caring about its own well-being (i.e., how the mechanism would evolve). The push for cooperation probably came about more because we developed the ability to model the internal states of critters like ourselves so that we could mount a better offense or defense. The simplest mechanism would be for a facial expression or posture to cause us to feel a toned-down version of what we would normally feel when we had the same expression or posture (you’re looking for information, not to literally feel the same thing at the same intensity; when the biggest member of your pack is aggressing at you, you probably want the desire to run away or submit to override the empathetic aggression).
It’s worth noting (for me) that this doesn’t diminish the importance of empathy, and it doesn’t mean that I don’t really care about others. I think that caring for others is ultimately rooted in self-centeredness, but, like depth perception, it is probably a pre-installed circuit in our brains (a Type 1 system) that we can’t totally remove without radically modifying the hardware. Caring about another person is as much a part of me as being able to recognize their face. The specific mechanism is only important when you’re trying to do something specific with your caring circuits (or trying to figure out how to emulate them).