This isn’t really different than any other situation where people wish they had a different characteristic than they do.
I disagree. In most cases like this, people wish they were more empathetic toward their future selves, which isn’t relevant in the case of tricking yourself into doing radical altruism, if your future self won’t value it more than your current self.
The best reason to change your values so that altruism feels better is because it enhances altruism, not because it enhances consistency.
This argument depends entirely on how much you value altruism in the first place, which makes it not very appealing to me.
isn’t relevant in the case of tricking yourself into doing radical altruism, if your future self won’t value it more than your current self.
I don’t see the relevance. In prudential cases (e.g., getting yourself to go on a diet), the goal isn’t to feel more empathy toward your future self. The goal is to get healthy; feeling more empathy toward your future self may be a useful means to that end, but it’s not the only possible one. Similarly, in moral cases (e.g., getting yourself to donate to GiveWell), the goal isn’t to feel more empathy toward strangers. The goal is to help strangers suffer and die less.
This argument depends entirely on how much you value altruism in the first place, which makes it not very appealing to me.
Suppose you see a child drowning in your neighbor’s pool, and you can save the child without incurring risk. But, a twist: You have a fear of water.
Kaj and I aren’t saying: If you’re completely indifferent to the suffering of others, then there exists an argument so powerful that it can physically compel you to save the child. If that’s your precondition for an interesting or compelling moral argument, then you’re bound to be disappointed.
Kaj and I are saying: If you care to some extent about the suffering of others, then it makes sense for you to wish that you weren’t averse to water, because your preference not to be in the water is getting in the way of other preferences that you much more strongly prefer to hold. This is true even if you don’t care at all about your aversion to bodies of water in other contexts (e.g., you aren’t pining to join any swim teams). For the same reason, it can make sense to wish that you weren’t selfish enough to squander money on bone marrow transplants for yourself, even though you are that selfish.
the goal isn’t to feel more empathy toward your future self. The goal is to get healthy; feeling more empathy toward your future self may be a useful means to that end, but it’s not the only possible one.
Sorry, I used empathy a bit loosely. Anyway, the goal is to generate utility for my future self. Empathy is one mechanism for that, and there are others. The only reason to lose weight and get healthy, at least for me, is that I know for sure my future self will appreciate that. Otherwise I would just binge to satisfy my current self.
Kaj and I aren’t saying: If you’re completely indifferent to the suffering of others, then there exists an argument so powerful that it can physically compel you to save the child
What I’m saying is that if the child was random and I had a high risk of dying when trying to save them, then there’s no argument that would make me take that risk, although I’m probably much more altruistic than average already. If I had an irrational aversion to water that actually reflected none of my values then of course I’d like to get rid of that.
Kaj and I are saying: If you care to some extent about the suffering of others, then it makes sense for you to wish that you weren’t averse to water, because your preference not to be in the water is getting in the way of other preferences that you much more strongly prefer to hold.
It seems to me more like you’re saying that if I have even an inkling of altruism in me then I should make it a core value that overrides everything else.
For the same reason, it can make sense to wish that you weren’t selfish enough to squander money on bone marrow transplants for yourself, even though you are that selfish.
I really don’t understand. Either you are that selfish, or you aren’t. I’m that selfish, but also happily donate money. There’s no argument that could change that. I think the human ability to change core values is very limited, much more limited than the human ability to lose weight.
The only reason to lose weight and get healthy, at least for me, is that I know for sure my future self will appreciate that.
No. There are also important things that my present self desires be true of my future self, to some extent independently of what my future self wants. For instance, I don’t want to take a pill that will turn me into a murderer who loves that he’s a murderer, even though if I took such a pill I’d be happy I did.
if the child was random and I had a high risk of dying when trying to save them, then there’s no argument that would make me take that risk
If your risk of dying is high enough, then you shouldn’t try to save the child, since if you’re sure to die the expected value may well be negative. Still, I don’t see how this is relevant to any claim that anyone else on this thread (or in the OP) is making. ‘My altruism is limited, and I’m perfectly OK with how limited it is and wouldn’t take a pill to become more altruistic if one were freely available’ is a coherent position, though it’s not one I happen to find myself in.
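To make the ‘high enough risk’ point concrete, here is a minimal expected-value sketch; the symbols are my own illustration, not anything anyone in the thread has committed to. Suppose an attempt saves the child with probability q but kills the rescuer with probability p, and a simple tally weights the two lives at V_c and V_r:

```latex
% Toy comparison of attempting the rescue vs. staying out of the pool
% (illustrative symbols only; q, p, V_c, V_r are assumptions for this sketch).
\mathbb{E}[\text{attempt}] - \mathbb{E}[\text{stay out}] = q\,V_c - p\,V_r
% The attempt comes out ahead on this crude tally only when q V_c > p V_r;
% as p approaches 1 (you are sure to die), the difference can easily turn negative.
```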
If I had an irrational aversion to water that actually reflected none of my values then of course I’d like to get rid of that.
Then you understand the thing you were confused about initially: “I don’t understand this values vs preferred values thing.” Whether you call hydrophobia a ‘value’ or not, it’s clearly a preference; what Kaj and I are talking about is privileging some preferences over others, having meta-preferences, etc. This is pretty ordinary, I think.
It seems to me more like you’re saying that if I have even an inkling of altruism in me then I should make it a core value that overrides everything else.
Well, of course you should; when I say the word ‘should’, I’m building in my (conception of) morality, which is vaguely utilitarian and therefore is about maximizing, not satisficing, human well-being. For me to say that you should become more moral is like my saying that you shouldn’t murder people. If you’re inclined to murder people, then it’s unlikely that my saying ‘please don’t do that, it would be a breach of your moral obligations’ is going to have a large effect in dissuading you. Yet, all the same, it is bad to kill people, by the facts on the ground and the meaning of ‘bad’ (and of ‘kill’, and of ‘to’...). And it’s bad to strongly desire to kill people; and it’s bad to be satisfied with a strong desire to kill people; etc. Acts and their consequences can be judged morally even when the actors don’t themselves adhere to the moral system being used for judging.
I really don’t understand. Either you are that selfish, or you aren’t.
People aren’t any level of selfish consistently; they exhibit more selfishness in some situations than others. Kaj’s argument is that if I prize being altruistic over being egoistic, then it’s reasonable for me to put no effort into eliminating my aversion to cryonics, even though signing up for cryonics would exhibit no more egoism than the amount of egoism revealed in a lot of my other behaviors.
‘You ate those seventeen pancakes, therefore you should eat this muffin’ shouldn’t hold sway as an argument against someone who wants to go on a diet. For the same reason, ‘You would spend thousands of dollars on heart surgery if you needed it to live, therefore you should spend comparable amounts of money on cryonics to get a chance at continued life’ shouldn’t hold sway as an argument against someone who wants above all else to optimize for the happiness of the whole human species. (And who therefore wants to want to optimize for everyone’s aggregate happiness.)
I think the human ability to change core values is very limited, much more limited than the human ability to lose weight.
I’d love to see someone try to pick units with which to compare those two values. :)
Well, of course you should; when I say the word ‘should’, I’m building in my (conception of) morality, which is vaguely utilitarian and therefore is about maximizing, not satisficing, human well-being. For me to say that you should become more moral is like my saying that you shouldn’t murder people. [...] Acts and their consequences can be judged morally even when the actors don’t themselves adhere to the moral system being used for judging.
You should be more careful when thinking of examples and judging people explicitly. A true utilitarian would probably not want to make EA look as bad as you just did there, and would also understand that allies are useful to have even if their values aren’t in perfect alignment with yours. Because of that paragraph, it’s pretty difficult for me to look at anything else you said rationally.
Here’s some discussion by another person on why the social pressure applied by some EA people might be damaging to the movement.
I’m not trying to browbeat you into changing your values. (Your own self-descriptions make it sound like that would be a waste of time, and I’m really more into the Socratic approach than the Crusader approach.) I’m making two points about the structure of utilitarian reasoning:
1. ‘It’s better for people to have preferences that cause them to do better things.’ is nearly a tautology for consequentialists, because the goodness of things that aren’t intrinsically good is always a function of their effects. It’s not a bold or interesting claim; I could equally well have said ‘it’s good for polar bears to have preferences that cause them to do good things’. Ditto for Clippy. If any voluntary behavior can be good or bad, then the volitions causing such behavior can also be good or bad.
2. ‘Should’ can’t be relativized to the preferences of the person being morally judged, else you will be unable to express the idea that people are capable of voluntarily doing bad things.
Do you take something about 1 or 2 to be unduly aggressive or dismissive? Maybe it would help if you said more about what your own views on these questions are.
I’ll also say (equally non-facetiously): I don’t endorse making yourself miserable with guilt, forbidding yourself to go to weddings, or obsessing over the fact that you aren’t exactly 100% the person you wish you were. Those aren’t good for personal or altruistic goals. (And I think both of those matter, even if I think altruistic goals matter more.) I don’t want to lie to you about my ideals in order to be compassionate and tolerant of the fact that no one, least of all myself, lives up to them.
It would rather defeat the purpose of even having ideals if expressing or thinking about them made people less likely to achieve them, so I do hope we can find ways to live with the fact that our everyday moral heuristics don’t have to be (indeed, as a matter of psychological realism, cannot be) the same as our rock-bottom moral algorithm.
‘It’s better for people to have preferences that cause them to do better things.’ is nearly a tautology for consequentialists, because the goodness of things that aren’t intrinsically good is always a function of their effects.
Consequentialism makes no sense without a system that judges which consequences are good. By the way, I don’t understand why consequentialism and egoism would be mutually exclusive, which you seem to imply by conflating consequentialism and utilitarianism.
‘Should’ can’t be relativized to the preferences of the person being morally judged, else you will be unable to express the idea that people are capable of voluntarily doing bad things.
I don’t think I voluntarily do bad things according to my values, ever. I also don’t understand why other people would voluntarily do bad things according to their own values. My values change though, and I might think I did something bad in the past.
Other people do bad things according to my values, but if their actions are truly voluntary and I can’t point out a relevant contradiction in their thinking, saying they should do something else is useless, and working to restrict their behavior by other means would be more effective. Connotatively comparing them to murderers and completely ignoring that values have a spectrum would be one of the least effective strategies that come to mind.
Do you take something about 1 or 2 to be unduly aggressive or dismissive?
No.
I don’t want to lie to you about my ideals in order to be compassionate and tolerant of the fact that no one, least of all myself, lives up to them.
To me that seems like you’re ignoring what’s normally persuasive to people out of plain stubbornness. The reason I’m bringing this up is that I have altruistic goals too, and I find such talk damaging to them.
It would rather defeat the purpose of even having ideals if expressing or thinking about them made people less likely to achieve them
Having ideals is fine if you make it absolutely clear that’s all that they are. If thinking about them in a certain way motivates you, then great, but if it just makes some people pissed off then it would make sense to be more careful about what you say. Consider also that some people might have laxer ideals than you do, and still do more good according to your values. Ideals don’t make or break a good person.
I don’t understand why consequentialism and egoism would be mutually exclusive, which you seem to imply by conflating consequentialism and utilitarianism.
I’m not conflating the two. There are non-utilitarian moral consequentialisms. I’m not sure egoism qualifies, since egoism (like paperclip maximization) might not bear a sufficient family resemblance to the things we call ‘morality’. But that’s just a terminological issue.
If an egoist did choose to adopt moral terminology like ‘ought’ and ‘good’, and to cash those terms out using egoism, then the egoist would agree with my claim ‘It’s better for people to have preferences that cause them to do better things.’ But the egoist would mean by that ‘It better fits the goals of my form of egoism for people to have preferences that cause them to do things that make me personally happy’, whereas what I mean by the sentence is something more like ‘It better fits the goals of my form of altruism for people to have preferences that cause them to do things that improve the psychological welfare and preference-satisfaction of all agents’.
I don’t think I voluntarily do bad things according to my values, ever.
Interesting! Then your usage of ‘bad’ is very unusual. (Or your preferences and general psychological makeup are very unusual.) Most people think themselves capable of making voluntary mistakes, acting against their own better judgment, regretting their decisions, making normative progress, etc.
Connotatively comparing them to murderers
Sorry, I don’t think I was clear about why I drew this comparison. ‘Murder’ just means ‘bad killing’. It’s trivial to say that murder is bad. I was saying that it’s nearly as trivial to say that preferences that lead to bad outcomes are bad. But it would be bizarre for anyone to suggest that every suboptimal decision is as bad as murder! I clearly should have been more careful in picking my comparison, but I just didn’t think anyone would think I was honestly saying something almost unsurpassably silly.
I find such talk damaging to them.
What do you think is the best strategy for endorsing maximization as a good thing without seeming to endorse ‘you should feel horribly guilty and hate yourself if you haven’t 100% maximized your impact’? Or should we drop the idea that maximization is even a good thing?
Having ideals is fine if you make it absolutely clear that’s all that they are.
I don’t know what you mean by ‘that’s all they are’. Core preferences, ideals, values, goals… I’m using all these terms to pick out pretty much the same thing. I’m not using ‘ideal’ in any sense in which ideals are mere. They’re an encoding of the most important things in human life, by reference to optima.
Egoism is usually not the claim that everyone should act in the egoist’s self-interest, but that everyone should act in their own self-interest, i.e. "It better fits the goal of my egoism for people to have preferences that cause them to do things that make them happy".
That’s true in the philosophical literature. But consequentialist egoism is a complicated, confusing, very hard to justify, and very hard to motivate view, since when I say ‘I endorse egoism’ in that sense I’m really endorsing two contradictory goals, not a single goal: (1) An overarching goal to have my personal desires met; (2) An overarching goal that every person act in whatever way ey expects to meet eir desires. The former ‘goal’ is the truer one, in that it’s the one that actually guides my actions to the extent I’m a ‘good’ egoist; the latter goal is a weird hanger-on that doesn’t seem to be action-guiding. If the two goals come in conflict, then the really important and valuable bit (from my perspective, as a hypothetical egoist) is that people satisfy my values, not that they satisfy their own; possibly the two goals don’t come into conflict that often, but it’s clear which one is more important when they do.
This is also useful because it sets up a starker contrast with utilitarianism; moral egoism as the SEP talks about it is a lot closer to descriptive egoism, and could well arise from utilitarianism plus a confused view of human psychology.
when I say ‘I endorse egoism’ in that sense I’m really endorsing two contradictory goals, not a single goal: (1) An overarching goal to have my personal desires met; (2) An overarching goal that every person act in whatever way ey expects to meet eir desires
The two goals don’t conflict, or, more precisely, (2) isn’t a goal, it’s a decision rule. There is no conflict in having the goal of having your personal desires met and believing that the correct decision rule is to do whatever maximizes the fulfillment of one’s own desires. It’s similar to how in the prisoner’s dilemma, each prisoner wants the other to cooperate, but doesn’t believe that the other prisoner should cooperate.
There is no conflict in having the goal of having your personal desires met and believing that the correct decision rule is to do whatever maximizes the fulfillment of one’s own desires.
I think it depends on what’s meant by ‘correct decision rule’. Suppose I came up to you and said that intuitionistic mathematics is ‘correct’, and conventional mathematics is ‘incorrect’; but not in virtue of correspondence to any non-physical mathematical facts; and conventional mathematics is what I want people to use; and using conventional mathematics, and treating it as correct, furthers everyone else’s goals more too; and there is no deeper underlying rule that rationally commits anyone to saying that intuitionistic mathematics is correct. What then is the content of saying that intuitionistic mathematics is right and conventional is wrong?
It’s similar to how in the prisoner’s dilemma, each prisoner wants the other to cooperate, but doesn’t believe that the other prisoner should cooperate.
I don’t think the other player will cooperate, if I think the other player is best modeled as a rational agent. I don’t know what it means to add to that that the other player ‘shouldn’t’ cooperate. If I get into a PD with a non-sentient Paperclip Maximizer, I might predict that it will defect, but there’s no normative demand that it do so. I don’t think that it should maximize paperclips, and if a bolt of lightning suddenly melted part of its brain and made it better at helping humans than at making paperclips, I wouldn’t conclude that this was a bad or wrong or ‘incorrect’ thing, though it might be a thing that makes my mental model of the erstwhile paperclipper more complicated.
Sorry, I don’t know much about the philosophy of mathematics, so your analogy goes over my head.
I don’t know what it means to add to that that the other player ‘shouldn’t’ cooperate.
It means that it is optimal for the other player to defect, from the other player’s point of view, if they’re following the same decision rule that you’re following. Given that you’ve endorsed this decision rule to yourself, you have no grounds on which to say that others shouldn’t use it as well. If the other player chooses to cooperate, I would be happy because my preferences would have been fulfilled more than they would have been had he defected, but I would also judge that he had acted suboptimally, i.e. in a way he shouldn’t have.
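To illustrate the decision-rule point, here is a minimal sketch using the standard textbook payoffs (the specific numbers are the usual illustrative ones, not anything from this thread): defection is optimal for each player taken separately, even though each player is better off when the other cooperates.

```python
# Standard one-shot Prisoner's Dilemma payoffs: (my payoff, your payoff).
# The numbers are the usual textbook values, chosen only for illustration.
PAYOFFS = {
    ("C", "C"): (3, 3),  # both cooperate
    ("C", "D"): (0, 5),  # I cooperate, you defect
    ("D", "C"): (5, 0),  # I defect, you cooperate
    ("D", "D"): (1, 1),  # both defect
}

def my_payoff(my_move, your_move):
    return PAYOFFS[(my_move, your_move)][0]

# 1. Defection dominates: whatever you do, I do better by defecting.
for your_move in ("C", "D"):
    assert my_payoff("D", your_move) > my_payoff("C", your_move)

# 2. Yet I still want *you* to cooperate: whichever move I make,
#    my payoff is higher when you play "C" than when you play "D".
for my_move in ("C", "D"):
    assert my_payoff(my_move, "C") > my_payoff(my_move, "D")

print("Defecting is optimal for each player, yet each wants the other to cooperate.")
```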
I’m not sure egoism qualifies, since egoism (like paperclip maximization) might not bear a sufficient family resemblance to the things we call ‘morality’. But that’s just a terminological issue.
I’d have no problem calling Clippy a consequentialist, but a polar bear would probably lack the sufficient introspection. You have to have some inkling about what your values are to have morality. You’re right that it’s a terminology issue, and a difficult one at that.
It’s better for people to have preferences that cause them to do better things.’ But the egoist would mean by that ‘It better fits the goals of my form of egoism for people to have preferences that cause them to do things that make me personally happy’
Disclaimer: I use “pleasure” as an umbrella term for various forms of experiential goodness. Say there’s some utility cap in my brain that limits the amount of pleasure I can get from a single activity. One of these activities is helping other people, and the amount of pleasure I get from this activity is capped in a way that I can only get under 50 % of the maximum possible pleasure from altruism. Necessarily this will make me look for sources of pleasure elsewhere. What exactly does this make me? If I can’t call myself an egoist, then I’m at a loss here. Perhaps “egoism” is a reputation hit anyway and I should ditch the word, huh?
Actually, the reason why EA ideas appeal to me is that the pleasure I can get by using the money on myself seems to be already capped, I’m making much more money than I use, and I’m looking for other sources. Since I learned about fuzzies, being actually effective seems to be the only way to get any pleasure from this altruism thing.
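As a toy illustration of the ‘cap pushes me to diversify’ idea: in the sketch below, pleasure from each activity saturates at a cap, so a pleasure-maximizer ends up spreading effort across several activities once the best one tops out. All activity names, caps, and rates are invented for the example; nothing here is meant as a model anyone in the thread actually endorses.

```python
# Toy model: pleasure from each activity saturates at a cap, so a
# pleasure-maximizer spreads effort across activities instead of
# putting everything into one. All numbers are invented for illustration.
CAPS = {"altruism": 40, "hobbies": 60, "socializing": 50}   # max pleasure per activity
RATES = {"altruism": 10, "hobbies": 8, "socializing": 6}    # pleasure per unit of effort

def pleasure(activity, effort):
    """Pleasure grows linearly with effort until it hits the activity's cap."""
    return min(RATES[activity] * effort, CAPS[activity])

def allocate(total_effort):
    """Greedily give each unit of effort to the activity with the largest marginal gain."""
    effort = {activity: 0 for activity in CAPS}
    for _ in range(total_effort):
        best = max(CAPS, key=lambda a: pleasure(a, effort[a] + 1) - pleasure(a, effort[a]))
        effort[best] += 1
    return effort

# Once 'altruism' hits its cap, further effort flows to the other activities.
print(allocate(15))  # e.g. {'altruism': 4, 'hobbies': 7, 'socializing': 4}
```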
Then your usage of ‘bad’ is very unusual.
Most people don’t do much introspection, so I would expect that. However you saying this surprises me, since I didn’t expect to be unusual in this crowd.
mistakes, acting against their own better judgment, regretting their decisions, making normative progress, etc.
These are all bad only in retrospect, and explicable by my having had insufficient information or different values compared to now; the exception is "normative progress", which I don’t understand. Acting badly voluntarily would mean making a choice which I expect to have bad consequences. It might help your understanding to know what part of my decision process I usually identify with.
This brings up another terminological problem. See, I totally understand that I’d better use the word "bad" in a way that other people understand me, but if I used it that way while describing my own decision process, that would lead me to scold myself unnecessarily. I don’t think I voluntarily do anything bad in my brain, but it makes sense for other people to ascribe voluntary action to some of my mistakes, since they don’t really have access to my decision processes. I also have very different private and public meanings for the word "I". In my private considerations, the role of "I" in my brain is very limited.
I just didn’t think anyone would think I was honestly saying something almost unsurpassably silly.
I probably should have just asked what you meant, since my brain came up with only the silly interpretation. I think the reason I got angry at the murder example was the perceived social cost of my actions being associated with murder. Toe stubbing is trivially bad too, you know; badness scales. I made a mistake, but only in retrospect. I’ll make a different mistake next time.
What do you think is the best strategy for endorsing maximization as a good thing without seeming to endorse ‘you should feel horribly guilty and hate yourself if you haven’t 100% maximized your impact’? Or should we drop the idea that maximization is even a good thing?
When I first learned how little a life costs, my reaction wasn’t guilt, at least not for long. This led me to think "wow, apparently I care about people suffering much less than I previously thought, wonder why that is", not "I must be mistaken about my values and should feel horrible guilt for not maximizing my actual values".
As I previously described, motivation for altruism is purely positive for me, and I’m pretty sure that if I associated EA with guilt, that would make me ditch the idea altogether and look for sources of pleasure elsewhere. I get depressed easily, which makes any negative motivation very costly.
I’m not motivated by the idea of maximization in itself, but it helps my happiness to know how much my money can buy. What one person finds motivational, another can find demotivational. I think we should try to identify our audience to maximize impact. As a default I’d still try to motivate people positively, not to associate crappy feelings with the important ideas. Human brains are predictably irrational, and there’s a difference between saying you can save several lives in a month and be a superhero by donating, and saying you can be a serial killer by spending the money on yourself.
It seems various things are meant by egoism.
Begins with “Egoism can be a descriptive or a normative position.”
It’s also a common attack term :-/
I’d better stop using it. In fact, I’d better stop using any label for my value system.