I don’t understand why consequentialism and egoism would be mutually exclusive, which you seem to imply by conflating consequentialism and utilitarianism.
I’m not conflating the two. There are non-utilitarian moral consequentialisms. I’m not sure egoism qualifies, since egoism (like paperclip maximization) might not bear a sufficient family resemblance to the things we call ‘morality’. But that’s just a terminological issue.
If an egoist did choose to adopt moral terminology like ‘ought’ and ‘good’, and to cash those terms out using egoism, then the egoist would agree with my claim ‘It’s better for people to have preferences that cause them to do better things.’ But the egoist would mean by that ‘It better fits the goals of my form of egoism for people to have preferences that cause them to do things that make me personally happy’, whereas what I mean by the sentence is something more like ‘It better fits the goals of my form of altruism for people to have preferences that cause them to do things that improve the psychological welfare and preference-satisfaction of all agents’.
I don’t think I voluntarily do bad things according to my values, ever.
Interesting! Then your usage of ‘bad’ is very unusual. (Or your preferences and general psychological makeup are very unusual.) Most people think themselves capable of making voluntary mistakes, acting against their own better judgment, regretting their decisions, making normative progress, etc.
Connotatively comparing them to murderers
Sorry, I don’t think I was clear about why I drew this comparison. ‘Murder’ just means ‘bad killing’. It’s trivial to say that murder is bad. I was saying that it’s nearly as trivial to say that preferences that lead to bad outcomes are bad. But it would be bizarre for anyone to suggest that every suboptimal decision is as bad as murder! I clearly should have been more careful in picking my comparison, but I just didn’t think anyone would think I was honestly saying something almost unsurpassably silly.
I find such talk damaging to them.
What do you think is the best strategy for endorsing maximization as a good thing without seeming to endorse ‘you should feel horribly guilty and hate yourself if you haven’t 100% maximized your impact’? Or should we drop the idea that maximization is even a good thing?
Having ideals is fine if you make it absolutely clear that’s all that they are.
I don’t know what you mean by ‘that’s all they are’. Core preferences, ideals, values, goals… I’m using all these terms to pick out pretty much the same thing. I’m not using ‘ideal’ in any sense in which ideals are mere. They’re an encoding of the most important things in human life, by reference to optima.
Egoism is usually not the claim that everyone should act in the egoist’s self-interest, but that everyone should act in their own self-interest, i.e. “It better fits the goal of my egoism for people to have preferences that cause them to do things that make them happy”.
That’s true in the philosophical literature. But consequentialist egoism is a complicated, confusing, very hard to justify, and very hard to motivate view, since when I say ‘I endorse egoism’ in that sense I’m really endorsing two contradictory goals, not a single goal: (1) An overarching goal to have my personal desires met; (2) An overarching goal that every person act in whatever way ey expects to meet eir desires. The former ‘goal’ is the truer one, in that it’s the one that actually guides my actions to the extent I’m a ‘good’ egoist; the latter goal is a weird hanger-on that doesn’t seem to be action-guiding. If the two goals come in conflict, then the really important and valuable bit (from my perspective, as a hypothetical egoist) is that people satisfy my values, not that they satisfy their own; possibly the two goals don’t come into conflict that often, but it’s clear which one is more important when they do.
This is also useful because it sets up a starker contrast with utilitarianism; moral egoism as the SEP talks about it is a lot closer to descriptive egoism, and could well arise from utilitarianism plus a confused view of human psychology.
when I say ‘I endorse egoism’ in that sense I’m really endorsing two contradictory goals, not a single goal: (1) An overarching goal to have my personal desires met; (2) An overarching goal that every person act in whatever way ey expects to meet eir desires
The two goals don’t conflict, or, more precisely, (2) isn’t a goal, it’s a decision rule. There is no conflict in having the goal of having your personal desires met and believing that the correct decision rule is to do whatever maximizes the fulfillment of one’s own desires. It’s similar to how in the prisoner’s dilemma, each prisoner wants the other to cooperate, but doesn’t believe that the other prisoner should cooperate.
There is no conflict in having the goal of having your personal desires met and believing that the correct decision rule is to do whatever maximizes the fulfillment of one’s own desires.
I think it depends on what’s meant by ‘correct decision rule’. Suppose I came up to you and said that intuitionistic mathematics is ‘correct’ and conventional mathematics is ‘incorrect’; but not in virtue of correspondence to any non-physical mathematical facts; and conventional mathematics is what I want people to use; and using conventional mathematics, and treating it as correct, furthers everyone else’s goals more too; and there is no deeper underlying rule that rationally commits anyone to saying that intuitionistic mathematics is correct. What then is the content of saying that intuitionistic mathematics is right and conventional mathematics is wrong?
It’s similar to how in the prisoner’s dilemma, each prisoner wants the other to cooperate, but doesn’t believe that the other prisoner should cooperate.
I don’t think the other player will cooperate, if I think the other player is best modeled as a rational agent. I don’t know what it means to add to that that the other player ‘shouldn’t’ cooperate. If I get into a PD with a non-sentient Paperclip Maximizer, I might predict that it will defect, but there’s no normative demand that it do so. I don’t think that it should maximize paperclips, and if a bolt of lightning suddenly melted part of its brain and made it better at helping humans than at making paperclips, I wouldn’t conclude that this was a bad or wrong or ‘incorrect’ thing, though it might be a thing that makes my mental model of the erstwhile paperclipper more complicated.
Sorry, I don’t know much about the philosophy of mathematics, so your analogy goes over my head.
I don’t know what it means to add to that that the other player ‘shouldn’t’ cooperate.
It means that it is optimal for the other player to defect, from the other player’s point of view, if they’re following the same decision rule that you’re following. Given that you’ve endorsed this decision rule to yourself, you have no grounds on which to say that others shouldn’t use it as well. If the other player chooses to cooperate, I would be happy because my preferences would have been fulfilled more than they would have been had he defected, but I would also judge that he had acted suboptimally, i.e. in a way he shouldn’t have.
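To make the ‘decision rule’ point concrete, here’s a minimal sketch with made-up payoff numbers (nothing here comes from the thread itself; the payoffs are just the standard illustrative ones):

```python
# One-shot prisoner's dilemma with the egoist decision rule
# "choose whatever maximizes my own payoff, given a prediction of the other player".
# The payoff numbers are purely illustrative.

PAYOFFS = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(their_move):
    """The egoist decision rule: maximize my own payoff against a fixed prediction."""
    return max(("C", "D"), key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Defection is optimal for me no matter what the other player does...
assert best_response("C") == "D" and best_response("D") == "D"

# ...and by symmetry it's optimal for the other player too. Yet I still *want*
# the other player to cooperate, since that gives me a higher payoff:
assert PAYOFFS[("D", "C")][0] > PAYOFFS[("D", "D")][0]
```

So ‘he shouldn’t have cooperated’ here just means ‘cooperating wasn’t the output of the decision rule he (and I) endorse’, even though his cooperating is the outcome I prefer.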
I’m not sure egoism qualifies, since egoism (like paperclip maximization) might not bear a sufficient family resemblance to the things we call ‘morality’. But that’s just a terminological issue.
I’d have no problem calling Clippy a consequentialist, but a polar bear would probably lack sufficient introspection. You have to have some inkling of what your values are to have morality. You’re right that it’s a terminology issue, and a difficult one at that.
It’s better for people to have preferences that cause them to do better things.’ But the egoist would mean by that ‘It better fits the goals of my form of egoism for people to have preferences that cause them to do things that make me personally happy’
Disclaimer: I use “pleasure” as an umbrella term for various forms of experiential goodness. Say there’s some utility cap in my brain that limits the amount of pleasure I can get from a single activity. One of these activities is helping other people, and the pleasure I get from it is capped such that altruism can give me less than 50% of the maximum possible pleasure. Necessarily this will make me look for sources of pleasure elsewhere. What exactly does this make me? If I can’t call myself an egoist, then I’m at a loss here. Perhaps “egoism” is a reputation hit anyway and I should ditch the word, huh?
Actually, the reason why EA ideas appeal to me is that the pleasure I can get by using the money on myself seems to be already capped, I’m making much more money than I use, and I’m looking for other sources. Since I learned about fuzzies, being actually effective seems to be the only way to get any pleasure from this altruism thing.
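Here’s a toy version of the “utility cap” picture from the previous two paragraphs, just to show why a cap forces diversification (the activities, numbers, and the 50% figure are placeholders, not claims about anyone’s actual psychology):

```python
# Toy model: each pleasure source is capped separately; once a source hits its
# cap, extra effort there adds nothing, so a pleasure-maximizer looks elsewhere.

MAX_PLEASURE = 100.0

# Cap on each activity, as a fraction of the maximum possible pleasure (illustrative).
CAPS = {"altruism": 0.5, "spending_on_myself": 0.3, "hobbies": 0.4}

def attainable_pleasure(effort):
    """Total pleasure from an effort allocation, with each source capped separately."""
    return sum(min(effort.get(activity, 0.0), cap * MAX_PLEASURE)
               for activity, cap in CAPS.items())

print(attainable_pleasure({"altruism": 50.0}))                   # 50.0
print(attainable_pleasure({"altruism": 80.0}))                   # still 50.0: past the cap
print(attainable_pleasure({"altruism": 50.0, "hobbies": 40.0}))  # 90.0: other sources needed
```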
Then your usage of ‘bad’ is very unusual.
Most people don’t do much introspection, so I would expect that. However you saying this surprises me, since I didn’t expect to be unusual in this crowd.
mistakes, acting against their own better judgment, regretting their decisions, making normative progress, etc.
These are all bad only in retrospect, and explicable by my having had insufficient information or different values compared to now; the exception is “normative progress”, which I don’t understand. Acting badly voluntarily would mean making a choice that I expect to have bad consequences. It might help your understanding to know what part of my decision process I usually identify with.
This brings up another terminological problem. See, I totally understand that I’d better use the word “bad” in a way that other people understand, but if I used it that way while describing my own decision process, it would lead me to scold myself unnecessarily. I don’t think I voluntarily do anything bad in my brain, but it makes sense for other people to ascribe voluntary action to some of my mistakes, since they don’t really have access to my decision processes. I also have very different private and public meanings for the word “I”. In my private considerations, the role of “I” in my brain is very limited.
I just didn’t think anyone would think I was honestly saying something almost unsurpassably silly.
I probably should have just asked what you meant, since my brain came up with only the silly interpretation. I think the reason I got angry at the murder example was the perceived social cost of having my actions associated with murder. Toe-stubbing is trivially bad too, you know; badness scales. I made a mistake, but only in retrospect. I’ll make a different mistake next time.
What do you think is the best strategy for endorsing maximization as a good thing without seeming to endorse ‘you should feel horribly guilty and hate yourself if you haven’t 100% maximized your impact’? Or should we drop the idea that maximization is even a good thing?
When I first learned how little a life costs, my reaction wasn’t guilt, at least not for long. It led me to think “wow, apparently I care about people suffering much less than I previously thought, wonder why that is”, not “I must be mistaken about my values and should feel horrible guilt for not maximizing my actual values”.
As I previously described, my motivation for altruism is purely positive, and I’m pretty sure that if I associated EA with guilt, that would make me ditch the idea altogether and look for sources of pleasure elsewhere. I get depressed easily, which makes any negative motivation very costly.
I’m not motivated by the idea of maximization in itself, but it helps my happiness to know how much my money can buy. What one person finds motivational, another can find demotivational. I think we should try to identify our audience to maximize impact. As a default I’d still try to motivate people positively, not associate crappy feelings with the important ideas. Human brains are predictably irrational, and there’s a difference between saying you can save several lives in a month and be a superhero by donating, and saying you can be a serial killer by spending the money on yourself.
It seems various things are meant by egoism.
Begins with “Egoism can be a descriptive or a normative position.”
It’s also a common attack term :-/
I’d better stop using it. In fact, I’d better stop using any label for my value system.