Altruism is almost always put in opposition to egoism.
If you think that contradicts what I was saying, then I fear you have misunderstood my point. Altruism is (according to what I think was Comte’s usage) the opposite of egoism in the same way as loving is the opposite of hating: they point in opposite directions but the same person can do both—even, in unusual cases, both at once.
A single action will rarely be both altruistic and egoistic, just as a single action is rarely both loving and hating. But “altruism” doesn’t mean “never thinking about your own interests” any more than “loving” means “never hating anyone”. A typical person will be altruistic sometimes and egoistic sometimes; a typical person will sometimes be moved by love and sometimes by hate.
But you are praised [...] and condemned [...]
There are probably people who hold that everyone should be as completely altruistic and non-egoistic as possible. Perhaps Auguste Comte was one of them. That’s an entirely separate question from whether “altruism” implies the total absence of egoism; still more is it separate from whether “altruism” means anything like “reversed survival instinct”, which you might recall is the claim I was originally arguing against and which no one seems at all inclined to defend so far.
It’s not how much happiness you produce in others, it’s how much happiness it costs you that matters.
There may be people who believe that, but it certainly isn’t part of the meaning of “altruism”. And the example you give doesn’t support that very strong claim. If you do something with the purpose of making millions happier and not out of considering your own welfare then (at least in my book) that is an altruistic action whether it happens to help you or harm you. If people are reluctant to apply the term “altruistic” to actions that benefit the agent, I suggest that’s just because it’s hard to be sure something was done for the sake of others when self-interest is a credible alternative explanation.
Altruism is (according to what I think was Comte’s usage) the opposite of egoism in the same way as loving is the opposite of hating:
Bad analogy. Loving and hating are different emotions with different qualities, while egoism and altruism differ in the objects of their intent, not the quality of the intent. The intent is to serve the interests of the object; whose interests are to be served is what is at issue. Basically, it's whose love matters to you: your own, or the other guy's?
And your continued disavowal of absolute Altruism as the meaning of Altruism is self-contradictory: Altruism is what it is, and allowing people to be less than 100% altruistic does not change the quality that we're measuring in percentages.
More altruistic means more willing to sacrifice your interests for the interests of others. It's the balance of the trade-off that matters. The more you lose, the more altruistic you are. The smaller the gain to others for what you lose, the more altruistic you are. The more you hate the beneficiary, the more altruistic you are. It's the ratio of marginal cost to yourself (including how much you actually care for the beneficiary) versus marginal benefit to the beneficiary.
Of course, one should not just waste value inefficiently, destroying your own values to jack up the cost to yourself, or minimizing the value you create for others to minimize the benefit to others. But as you maximize net total weighted utils, it’s the relative weight you assign to your utils and their utils that matters.
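The "net total weighted utils" idea can be sketched numerically. The weights and option values below are illustrative assumptions of mine, not anything from the discussion:

```python
# Toy sketch: an agent ranks options by w_self * (utils to me) +
# w_other * (utils to others). As the paragraph above says, it's the
# relative weight on the two terms that distinguishes egoist from altruist.

def best_option(options, w_self, w_other):
    """Return the option maximizing the weighted sum of utils."""
    return max(options, key=lambda o: w_self * o[0] + w_other * o[1])

# Each option is (utils to me, utils to others); made-up numbers.
options = [(6, 0), (4, 4), (0, 7)]

print(best_option(options, 1.0, 0.0))  # pure egoist  -> (6, 0)
print(best_option(options, 1.0, 1.0))  # equal weights -> (4, 4)
print(best_option(options, 0.0, 1.0))  # pure altruist -> (0, 7)
```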
But I admittedly said this poorly:
It’s not how much happiness you produce in others, it’s how much happiness it costs you that matters.
I was just trying to get at the issue of the trade-off here. Setting myself on fire willy-nilly is not necessarily altruistic; it's only altruistic if it's done as an intended and efficient trade-off for the benefit of others.
If you do something with the purpose of making millions happier and not out of considering your own welfare then (at least in my book) that is an altruistic action whether it happens to help you or harm you.
I wrote:
If you do exactly as you please but thereby still make millions happier, you are not an altruist by the usual calculations.
You’ve changed the scenario. In mine, you did exactly as you pleased and it happened to make others happier. You changed it to “with the purpose of making millions happier”. That was not the purpose; satisfying yourself was the purpose.
So, in my scenario, are you altruistic according to you, or not?
I don’t see why. I was trying to point out a feature of the logical structure. If the difference between love/hate and egoism/altruism that you point out invalidates that, I’m not currently seeing why.
your continued disavowal of absolute Altruism as the meaning of Altruism is self contradictory
If (as I think is the case) your objection is simply that generally optimizing for one thing gets you suboptimal results by any other standard, so that e.g. if you optimize for others’ wellbeing then usually you end up worse off yourself, then of course I agree with that.
We seem to be agreed that (1) whatever the exact definition of “altruism”, it is possible to say coherently that a person, or an individual action, is somewhat altruistic and somewhat egoistic, and (2) altruism doesn’t mean actively preferring worse outcomes for oneself. In which case, I think we are in fact agreed about everything I was trying to say.
You’ve changed the scenario.
Yes; that was the whole point. Your scenario was relevant to the question “is altruism about intentions or about outcomes?”, but we never had any disagreement about that; of course it’s about intentions. I was aiming at the question “is altruism about acting for others or about suppressing one’s own interests?”. Though I’m not sure my scenario actually addresses that very well, and I suspect it’s almost impossible to give clear-cut examples of it. (Because in most circumstances there’s no observable difference between the results of caring more for others, and those of caring less for oneself.)
We’re agreed on 1 but not on 2.
As I maintained, a crucial part of altruism is the trade-off between your interests and the interests of others. The more of your interests you’ve sacrificed to others, the more altruistic you are. If nothing else, there is always an opportunity cost to pursuing the interests of others over your own.
I think we may be at cross purposes about #2, but there’s a related point I want to attend to first.
You have made a few times an argument that I’ll paraphrase thus: “If A is willing to sacrifice more of his own interests than B is for a given amount of gain for others, then A is more altruistic than B. Therefore altruism is all about how much you hurt yourself, not how much you help others”.
This argument addresses the question of what counts as being more altruistic, but not the question of at what point altruism begins. And that (purely terminological) question matters in this discussion, for the following reason. Objectivists, so I understand, say that altruism is a Bad Thing. But depending on where one draws that terminological line that could mean anything from “making huge personal sacrifices in exchange for tiny gains to others is a Bad Thing” to “making tiny personal sacrifices in exchange for huge gains to others is a Bad Thing”. You’ll get a lot more agreement with the first of those than with the second.
So. Suppose I’m considering my own welfare and that of some other person or people similar enough to me that we can compare utilities meaningfully between persons. (At least for the tradeoffs under consideration here.) For each of the following, (a) would you consider it altruistic, (b) would you approve, and (c) would you expect Ayn Rand to have approved?
1. I choose (X+10 utils for others, Y-1 utils for me) over (X for others, Y for me).
2. I choose (X+1.1 utils for others, Y-1 utils for me) over (X for others, Y for me).
3. I choose (X+1 utils for others, Y-1.1 utils for me) over (X for others, Y for me).
4. I choose (X+1 utils for others, Y-10 utils for me) over (X for others, Y for me).
My own answer: I would consider all of those altruistic, because in every case my motivation seems clearly to be benefit to others. I would certainly approve of #1, would want to look at the rest of the context for #2 and #3, and would think #4 usually a stupid thing to do. My impression is that Ayn Rand would have disapproved heartily of all four, but I am not an Ayn Rand expert.
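For what it's worth, the four trades can be checked mechanically under a simple weighted-sum rule. The rule and the weight are my own framing for illustration, not a claim about Rand's actual criterion:

```python
# The four trades above as (gain to others, cost to me). Under a
# weighted-sum rule, a trade is worth making iff gain > self_weight * cost
# (weight on others normalized to 1). Illustrative assumption only.

trades = [(10, 1), (1.1, 1), (1, 1.1), (1, 10)]

def accepts(trade, self_weight):
    gain, cost = trade
    return gain > self_weight * cost

for t in trades:
    print(t, "accepted at equal weights:", accepts(t, 1.0))
# Only trades 3 and 4 -- (1, 1.1) and (1, 10) -- fail at equal weights.
```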
Now, back to your argument. If the only point it seeks to make is that different people’s interests generally aren’t perfectly aligned, and that caring more about others will therefore lead to less benefit for oneself, then of course I agree and indeed I’ve already said so. But if you’re making the stronger claim expressed in my paraphrase then I disagree. The statement “A is more altruistic if he’s willing to accept more personal loss for a given gain to others” is exactly equivalent to “A is more altruistic if he’s willing to accept less gain to others for a given personal loss”, and if the first of these shows that altruism is all about embracing personal loss then the second shows that it’s all about seeking gain for others.
And, finally, back to issue 2 from the parent and grandparent comments. “Actively preferring worse outcomes for oneself” can mean two things, and I think you’ve taken a different meaning from the one I intended. What I meant by #2 was that altruism doesn’t mean actually preferring, other things equal, worse outcomes for oneself. Of course it does mean being prepared to accept, in some cases, worse outcomes for oneself in exchange for better outcomes for others.
I like books. I buy quite a lot of them. They cost money, and as a result I have less money than if I bought fewer books. That doesn’t mean I actively prefer having less money; it means that in some cases I value a book more than the money it costs me.
I care about other people. Sometimes I do things to help them. That costs money or time or opportunity to benefit myself in other ways, and as a result I am sometimes worse off personally than I’d be if I didn’t care about other people. That doesn’t mean I actively prefer worse outcomes for myself; it means that in some cases I value a benefit to others more than what it costs me.
Does the Objectivist objection to “altruism”, as you understand it, extend to all instances of the schema in the foregoing paragraph? That is, does it advise me never to let any benefit to others, however great, outweigh any loss to myself, however small?
Objectivists, so I understand, say that altruism is a Bad Thing.
The analysis of Objectivism is further complicated by Rand’s act essentialism. As I would characterize her view, it’s the principle of the act, the intent of the policy involved, rather than its particular consequences, that matters.
Just as life wasn’t your living and breathing, but life “qua man”, altruism for her would be an intended policy of sacrificing your values for the values of others, which is just what Comtean altruists suggest as the moral policy.
I care about other people.
Per Rand, your feelings are not the standard of morality. Acting because you feel like it is “whim-worshiping subjectivism”, per Rand. Me, I’m a whim-worshiping subjectivist, so if you care about people and want to help them, great, knock yourself out. Where I part company with most altruists is on the belief in a duty to be altruistic. I don’t condemn people who aren’t altruistic but instead have other values they wish to pursue, as long as they aren’t infringing on what I consider to be the rights of others.
Does the Objectivist objection to “altruism”, as you understand it, extend to all instances of the schema in the foregoing paragraph?
You have a prior problem with Rand here. You have not defined a moral code based on principles, but are making ad hoc evaluations of preference. You unprincipled, whim-worshiping subjectivist, you.
That is, does it advise me never to let any benefit to others, however great, outweigh any loss to myself, however small?
You’re analyzing in a different schema than she does. You’re analyzing the particular concrete, while she analyzes the “essentials” of the act. The practical answer is no. Sometimes the correct moral code will seem to require sacrificing your interests to others in a particular situation. For example, she would be against stealing even when you’re “sure” you will get away with it.
But if your intent is to sacrifice your values to the values of others, if that is the standard by which you judge the morality of the act, then you’re acting on the basis of an evil moral code.
Per Rand, your feelings are not the standard of morality.
I wasn’t suggesting that they are. Per Rand, my feelings are the standard of whether I’m being “altruistic” or not, and my question was about that.
You have a prior problem [...] You have not defined a moral code based on principles, but are making ad hoc evaluations of preference.
I don’t see how you infer from what I wrote that I “have not defined a moral code based on principles”.
if your intent is to sacrifice your values to the values of others, if that is the standard by which you judge [...]
It seems obvious to me (perhaps this makes me a whim-worshipping subjectivist) that neither “always sacrifice your interests to those of others” nor “never sacrifice your interests to those of others” is remotely a sane policy. (I’ve put “interests” in place of your “values” because I don’t think anyone’s really talking about sacrificing values.)
Suppose I propose the following policy: “Consider your own interests and those of others as of equal weight”. Does Rand, and do Objectivists generally, consider that policy “evil”?
What about “Consider your own interests as weighing, so far as one can quantify them, 100x more than those of strangers and some intermediate amount for family, friends, etc.”? Note that living according to this policy will sometimes lead you to act in a way that furthers your own interests less than you could have done in favour of the interests of others; even of strangers.
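To make that last observation concrete, here is a toy check with hypothetical numbers: even a policy that weights your own interests 100x those of strangers sometimes favours the strangers.

```python
# Hypothetical numbers only: under a 100x self-weighting, the policy still
# tells you to help whenever the strangers' gain exceeds 100x your loss.

def prefers_helping(my_loss, strangers_gain, self_weight=100.0):
    """True iff the weighted rule favours taking the loss to help strangers."""
    return strangers_gain > self_weight * my_loss

print(prefers_helping(my_loss=1, strangers_gain=150))  # True: help anyway
print(prefers_helping(my_loss=1, strangers_gain=50))   # False: don't
```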