I think an important part of why people distrust those who accomplish altruistic ends while acting on self-serving motivations is that it’s entirely plausible those other motivations will work against the altruistic end at some point during implementation.
To use your example: if someone cured malaria and made a million dollars doing it, and the cure was available to everyone (or effectively eradicated the disease everywhere), that would create more net altruistic utility than making a million dollars selling video games (I like video games, but agree that their actual utility for most people’s preferences/needs is low compared to curing malaria). I would be less inclined to believe this if the person who cured malaria made their money by keeping the cure secret and charging enough that some of the people who needed it couldn’t access it; the loss in net altruism could then be quantified by the number of people thereby prevented from treating their malaria.
Furthermore, if this hypothetical self-interested malaria-curer were also to patent the cure and litigate aggressively (or threaten to) against competing cures, or otherwise intentionally prevent other people from producing one, and succeeded in doing so, the net utility of coming up with the cure could drop below zero: they may well have blocked someone more “purely” altruistic from developing a cure independently and helping more people than they did.
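The below-zero case can be made concrete with a toy expected-utility calculation. Everything here is invented purely for illustration (the numbers, the probability, and the `net_utility` helper are all assumptions, not anyone’s actual model): the point is only that once you subtract the counterfactual help a blocked competitor would have provided, the total can go negative.

```python
# Toy model of the "net utility drops below zero" argument.
# All numbers are illustrative, not empirical estimates.

def net_utility(people_helped, counterfactual_helped, prob_competitor_blocked):
    """Expected net altruistic utility relative to a counterfactual world
    in which an unblocked competitor produces a freely available cure."""
    return people_helped - prob_competitor_blocked * counterfactual_helped

# A restrictively priced cure helps 1M people; a "purely altruistic"
# competitor would have helped 5M; aggressive patent litigation blocks
# that competitor with probability 0.4.
print(net_utility(1_000_000, 5_000_000, 0.4))  # prints -1000000.0 (net negative)
```

Under these made-up numbers, curing malaria in a way that blocks competitors is worse in expectation than doing nothing, which is the intuition the paragraph above describes.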
These scenarios are quite plausible, precisely because the actions demanded by optimizing the non-altruistic motivators can easily diverge from those demanded by optimizing the altruistic end, even when the original intent was supposedly the latter. This is particularly true of the profit motive: the best way to turn a profit is not always anti-altruistic, but the most obvious and easiest-to-implement ways often are, as in the example above.
That’s not to say we should be intrinsically wary of people who manage to benefit themselves and others simultaneously, nor that a solution which isn’t maximizing altruistic utility can’t still be a net good. But the below-zero-utility case is, I would argue, common enough to be worth mentioning. People don’t distrust selfishly motivated actors solely for archaic or irrational reasons.
Good points.
It seems to me that people are also skeptical of those who claim to accomplish altruistic ends while acting on self-serving motivations because of a common “rule of thumb”:
Benefits to charity are taken directly from the potential benefits of the donor.
Religion may be the main promoter of this belief. The crucifixion of Jesus, for example, teaches that the greatest good came from the greatest sacrifice.
This assumption can only persist while the charitable payoffs of potential actions (or some subset of them) are unknown. If we could quantify these payoffs, perhaps we could eliminate the core uncertainty that spawned the rule of thumb and thereby encourage optimal allocation of charitable resources. That’s a big IF, I know.
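A minimal sketch of what “quantifying the payoffs” would buy us, assuming (hypothetically) that each action’s benefit to the donor and to others could be measured on a common scale. The action names and payoff numbers below are invented for illustration; the point is that with known payoffs, the rule of thumb visibly fails, since the action that does the most good for others is not the one with the greatest donor sacrifice.

```python
# Hypothetical payoff table: action -> (benefit_to_donor, benefit_to_others).
# All entries are made-up illustrative values on an arbitrary common scale.
actions = {
    "donate_anonymously": (-100, 120),  # classic sacrifice: donor loses, others gain
    "sell_cure_at_cost":  (50, 500),    # positive-sum: both donor and others gain
    "hoard_cure":         (200, 30),    # self-serving: donor gains most, others little
}

# With payoffs known, pick the action that maximizes benefit to others.
best = max(actions, key=lambda a: actions[a][1])
print(best)  # prints sell_cure_at_cost
```

Here the best action for others is also one the donor profits from, contradicting the “charity comes directly out of the donor’s hide” heuristic; the entire difficulty, as the paragraph above concedes, is in actually measuring those payoffs.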