I agree with others that the post is very nice and clear, as most of your posts are. Upvoted for that. I just want to provide a perspective not often voiced here. My mind does not work the way yours does and I do not think I am a worse person than you because of that. I am not sure how common my thought process is on this forum.
Going section by section:
1. I do not “care about every single individual on this planet”. I care about myself, my family, friends and some other people I know. I cannot bring myself to care (and I don’t really want to) about a random person halfway around the world, except in the non-scalable general sense that “it is sad that bad stuff happens, be it to 1 person or to 1 billion people”. I care about humanity surviving and thriving, in the abstract, but I do not feel the connection between the current suffering and future thriving. (Actually, it’s worse than that. I am not sure whether humanity existing, in Yvain’s words, in a 10m x 10m x 10m box of computronium with billions of sims is much different from actually colonizing the observable universe (or the multiverse, as the case might be). But that’s a different story, unrelated to the main point.)
2. No disagreement there: the stakes are high, though I would not say that a thriving community of 1000 is necessarily worse than a thriving community of 1 googolplex, as long as their probability of long-term survival and thriving is the same.
3. I occasionally donate modest amounts to this cause or that, if I feel like it. I don’t think I do what Alice, Bob or Christine did, and donate out of pressure or guilt.
4. I spend (or used to spend) a lot of time helping out strangers online with their math and physics questions. I find it more satisfying than caring for oiled birds or stray dogs. Like Daniel, I see the mountain ridges of bad education all around, of which the students asking for help on IRC are just tiny pebbles. Unlike Daniel, I do not feel that I “can’t possibly do enough”. I help people when I feel like it and I don’t pretend that I am a better person because of it, even if they thank me profusely after finally understanding how a free-body diagram works. I do wish someone more capable worked on improving the education system so that it runs at better than 1% efficiency, and I have seen isolated cases of it, but I do not feel that it is my problem to deal with. Wrong skillset.
5. I have read a fair amount of EA propaganda, and I still do not feel that I “should care about people suffering far away”, sorry. (Not really sorry, no.) It would be nice if fewer people died and suffered, sure. But “nice” is all it is. Call me heartless. I am happy that other people care, in case I am in the situation where I need their help. I am also happy that some people give money to those who care, for the same reason. I might even chip in, if it hits close to home.
6. I do not feel that I would be a better person if I donated more money or dedicated my life to solving one of the “biggest problems”, as opposed to doing what I am good at, though I am happy that some people feel that way; humanity’s strength is in its diversity.
7. Again, one of the main strengths of humankind is its diversity, and the Bell-curve outliers like “Gandhi, Mother Theresa, Nelson Mandela” tend to have more effect than those of us within 1 standard deviation. Some people address “global poverty”, others write poems, prove theorems, shoot the targets they are told to, or convince other people to do what they feel is right. No one knows which of these is more likely to result in the long-term prosperity of the human race. So it is best to diversify and hope that one of these outliers does not end up killing all of us, intentionally or accidentally.
8. I don’t feel the weight of the world. Because it does not weigh on me.
Note: having reread what I wrote, I suspect that some people might find it kind of Objectivist. I actually tried reading Atlas Shrugged and quit after 100 pages or so, getting extremely annoyed by the author belaboring an obvious and trivial point over and over. So I only have a vague idea what the movement is all about. And I have no interest in finding out more, given that people who find this kind of writing insightful are not ones I want to associate with.
I don’t disagree, and I don’t think you’re a bad person, and my intent is not to guilt or pressure you. My intent is more to show some people that certain things that may feel impossible are not impossible. :-)
A few things, though:
No one knows which of these is more likely to result in the long-term prosperity of the human race. So it is best to diversify and hope that one of these outliers does not end up killing all of us, intentionally or accidentally.
This seems like a cop out to me. Given a bunch of people trying to help the world, it would be best for all of them to do the thing that they think most helps the world. Often, this will lead to diversity (not just because people have different ideas about what is good, but also because of diminishing marginal returns and saturation). Sometimes, it won’t (e.g. after a syn bio proof of concept that kills 1⁄4 of the race I would hope that diversity in problem-selection would decrease). “It is best to diversify and hope” seems like a platitude that dodges the fun parts.
I do not “care about every single individual on this planet”. I care about myself, my family, friends and some other people I know.
I also have this feeling, in a sense. I interpret it very differently, and I am aware of the typical mind fallacy, but I also caution against the “you must be Fundamentally Different” fallacy. Part of the theme behind this post is “you can interpret the internal caring feelings differently if you want”, and while I interpret my care-senses differently, I do empathize with this sentiment.
That’s not to say that you should come around to my viewpoint, by any means. But if you (or others) would like to try, for one reason or another, consider the following points:
Do you care only about the people who are currently close friends, or also the people who could be close friends? Is the value a property of the person, or a property of the fact that that person has been brought to your awareness?
Would you care more about humans in a context where humanity is treated as the ‘in-group’? For example, consider a situation where an alien race is at war with humans, and a roving band of alien brutes have captured a human family and are torturing them for fun. Does this boil your blood? Or do you not really care?
I assume that you wouldn’t push a friend in front of the trolley to save ten strangers. However, if you and a friend were in a room with ten strangers behind a veil of uncertainty, and were informed that the twelve of you were about to play in a trolley game, would you sign a contract which stated that (assuming unanimous agreement) the pusher agrees to push the pushee?
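(To make the arithmetic behind that last question explicit, on one natural reading of the setup and with numbers that are mine rather than anything stated above: behind the veil, each of the twelve is equally likely to end up in any role, so agreeing in advance trades roughly a 1/12 chance of being the one pushed for a 10/12 chance of being among the ten who get saved:
$$P(\text{you die} \mid \text{everyone signs}) \approx \tfrac{1}{12} \quad\text{versus}\quad P(\text{you die} \mid \text{no one signs}) \approx \tfrac{10}{12}.$$
That is why signing the contract can look attractive even to someone who would never push a friend once the roles are known.)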
In my case, much of my decision to care about the rest of the world is due to an adjustment upwards of the importance of other people (after noticing that I tend to care significantly about people after I have gotten to know them very well, and deciding that people don’t matter less just because I’m not yet close to them). There’s also a significant portion of my caring that comes from caring about others because I would want others to care about me if the positions were reversed, and this seeming like the right action in a timeless sense.
Finally, much of my caring comes from treating all of humanity as my in-group (everyone is a close friend, I just don’t know most of them yet; see also the expanding circle).
I mess with my brother sometimes, but anyone else who tries to mess with my brother has to go through me first. Similarly there is some sense in which I don’t “care” about most of the nameless masses who are out of my sight (in that I don’t have feelings for them), but there’s a fashion in which I do care about them, in that anyone who fucks with humans fucks with me.
Disease, war, and death are all messing with my people, and while I may not be strong enough to do anything about it today, there will come a time.
Do you care only about the people who are currently close friends, or also the people who could be close friends?
There may be a group of people, such that it is possible for any one individual of the group to become my close friend, but where it is not possible for all the individuals to become my close friends simultaneously.
In that case, saying “any individual could become a close friend, so I should multiply ‘caring for one friend’ by the number of individuals in the group” is wrong. Instead, I should multiply “caring for one friend” by the number of individuals in the group who can become my friend simultaneously, and not take into account the individuals in excess of that. In fact, even that may be too strong. It may be possible for one individual in the group to become my close friend only at the cost of reducing the closeness to my existing friends, in which case I should conclude that the total amount I care shouldn’t increase at all.
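(A compact way to write down the arithmetic of this point, using symbols of my own choosing rather than anything from the comment itself: if $c$ is the caring extended to one close friend, $N$ is the size of the group, and $k$ is the number of its members who could be my close friends at the same time, the claim is that the relevant total is
$$\text{total caring} \approx c \cdot \min(N, k)$$
rather than $c \cdot N$, and possibly even less than that if each new friendship partly displaces an existing one.)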
The point is that the fact that someone happens to be your close friend seems like the wrong reason to care about them.
Let’s say, for example, that:
If X was my close friend, I would care about X
If Y was my close friend, I would care about Y
X and Y could not both be close friends of mine simultaneously.
Why should whether I care for X or care for Y depend on which one I happen to end up being close friends with? Rather, why shouldn’t I just care about both X and Y regardless of whether they are my close friends or not?
Perhaps I have a limited amount of caring available and I am only able to care for a certain number of people. If I tried to care for both X and Y I would go over my limit and would have to reduce the amount of caring for other people to make up for it. In fact, “only X or Y could be my close friend, but not both” may be an effect of that.
It’s not “they’re my close friend, and that’s the reason to care about them”, it’s “they’re under my caring limit, and that allows me to care about them”. “Is my close friend” is just another way to express “this person happened, by chance, to be added while I was still under my limit”. There is nothing special about this person, compared to the pool of all possible close friends, except that this person happened to have been added at the right time (or under randomly advantageous circumstances that don’t affect their merit as a person, such as living closer to you).
Of course, this sounds bad because of platitudes we like to say but never really mean. We like to say that our friends are special. They aren’t; if you had lived somewhere else or had different random experiences, you’d have had different close friends.
“Is my close friend” is just another way to express “this person happened, by chance, to be added while I was still under my limit”. There is nothing special about this person, compared to the pool of all possible close friends, except that this person happened to have been added at the right time (or under randomly advantageous circumstances that don’t affect their merit as a person, such as living closer to you).
I think I would state a similar claim in a very different way. Friends are allies; both of us have implicitly agreed to reserve resources for the use of the other person in the friendship. (Resources are often as simple as ‘time devoted to a common activity’ or ‘emotional availability.’) Potential friends and friends might be indistinguishable to an outside observer, but to me (or them) there’s an obvious difference in that a friend can expect to ask me for something and get it, and a potential friend can’t.
(Friendships in this view don’t have to be symmetric: there are people whose complaints I’d listen to even though I don’t expect they’d listen to mine, and the reverse exists as well.)
They aren’t; if you had lived somewhere else or had different random experiences, you’d have had different close friends.
I think that it’s reasonable to call facts ‘special’ relative to counterfacts: yes, I would have had different college friends if I had gone to a different college, but I did actually go to the college I went to, and actually did make the friends I did there.
That’s a solid point, and to a significant extent I agree.
There are quite a lot of things that people can spend these kinds of resources on that are very effective at a small scale. This is an entirely sufficient basis to justify the idea of friends, or indeed “allies”, which is a more accurate term in this context. A network of local interconnections of such friends/allies who devote time and effort to one another is quite simply a highly efficient way to improve overall human well-being.
This also leads to a very simple, unbiased moral justification for devoting resources to your close friends; it’s simply that you, more so than other people, are in a unique position to affect the well-being of your friends, and vice versa. That kind of argument is also an entirely sufficient basis for some amount of “selfishness”—ceteris paribus, you yourself are in a better position to improve your own well-being than anyone else is.
However, this is not the same thing as “caring” in the sense So8res is using the term; I think he’s using the term more in the sense of “value”. For the above reasons, you can value your friends equally to anyone else while still devoting more time and effort to them. In general, you’re going to be better able to help your close friends than you are a random stranger on the street.
The way you put it, it seems like you want to care for both X and Y but are unable to.
However, if that’s the case then So8res’s point carries, because the core argument in the post translates to “if you think you ought to care about both X and Y but find yourself unable to, then you can still try to act the way that you would if you did, in fact, care about both X and Y”.
If you mean “devote time and effort to”, sure; I completely agree that it makes a lot of sense to do this for your friends, and you can’t do that for everyone.
If you mean “value as a human being and desire their well-being”, then I think it’s not justifiable to afford special privilege in this regard to close friends.
I don’t think the worth of a human being should be decided upon almost entirely circumstantial grounds, namely their proximity and/or relation to myself. If anything it should be a function of the qualities or the nature of that person, or perhaps even blanket equality.
If I believe that my friends are more valuable, it should be because of the qualities that led to them being my friend rather than simply the fact that they are my friends. However, if that’s so then there are many, many other people in the world who have similar qualities but are not my friends.
I don’t think the worth of a human being should be decided upon almost entirely circumstantial grounds, namely their proximity and/or relation to myself.
I assume you would pay your own mortgage. Would you mind paying my mortgage as well?
If you care equally for two people, your money should go to the one with the greatest need. It is very unlikely that in a country with many mortgage-payers, the person with the greatest need is you. So you should be paying down people’s mortgages until the mortgages of everyone in the world leave them no worse than you with respect to mortgages; only then should you then pay anything to yourself.
And even if it’s impractical to distribute your money to all mortgage payers in the world, surely you could find a specific mortgage payer who is so bad off that paying the mortgage of just this one person satisfies a greater need than paying off your own.
But you don’t. And you can’t. And everyone doesn’t and can’t, not just for mortgages, but for, say, food or malaria nets. You don’t send all your income above survival level to third-worlders who need malaria nets (or whatever other intervention people need the most); you don’t care for them and yourself equally.
Yes, if I really ought to value other human beings equally then it means I ought to devote a significant amount of time and/or money to altruistic causes, but is that really such an absurd conclusion?
Perhaps I don’t do those things, but that doesn’t mean I can’t and it doesn’t mean I shouldn’t.
1. You ought to value other human beings equally, but you don’t.
2. You do value other human beings equally, and you ought to act in accordance with that valuation, but you don’t.
You appear to be claiming 2 and denying 1. However, I don’t see a significant difference between 1 and 2; 1 and 2 result in exactly the same actions by you and it ends up just being a matter of semantics.
I agree; I don’t see a significant difference between thinking that I ought to value other human beings equally but failing to do so, and actually viewing them equally and not acting accordingly. If I accept either (1) or (2) it’s still a moral failure, and it is one that I should act to correct. In either case, what matters is the actions that I ought to take as a result (i.e. effective altruism), and I think the implications are the same in both cases.
That being said, I guess the methods that I would use to correct the problem would be different in either hypothetical. If it’s (1) then there may be ways of thinking about it that would result in a better valuation of other people, or perhaps to correct for the inaccuracy of the care-o-meter as per the original post.
If it’s (2), then the issue is one of akrasia, and there are plenty of psychological tools or rationalist techniques that could help.
Of course, (1) and (2) aren’t the only possibilities here; there are at least two more that are important.
You seem to be agreeing by not really agreeing. What does it even mean to say “I value other people equally but I don’t act on that”? Your actions imply a valuation, and in that implied valuation you clearly value yourself more than other people. It’s like saying “I prefer chocolate over vanilla ice cream, but if you offer me both I’ll always pick the vanilla”. Then you don’t really prefer chocolate over vanilla, because that’s what it means to prefer something.
My actions alone don’t necessarily imply a valuation, or at least not one that makes any sense.
There are a few different levels at which one can talk about what it means to value something, and revealed preference is not the only one that makes sense.
As usual, the word “better” hides a lot of relevant detail. Better for whom? By what measure?
Shockingly, though, in at least some cases and by some measures, it works out better for us if I pay your debt and you pay my debt, because it is possible for a third party to get much, much better terms on repayment than the original borrower. In many cases, debts can be sold for pennies on the dollar to anyone except the original borrower. See any of these articles.
Ah. It seems we have been talking about somewhat different things.
You are talking about the worth of a human being. I’m talking about my personal perception of the value of a human being under the assumption that other people can and usually do have different perceptions of the same value.
I try not to pass judgement on the worth of humans, but I am quite content with assigning my personal values to people based, in part, on “their proximity and/or relation to myself”.
I’m not entirely sure what a “personal perception of the value of a human being” is, as distinct from the value or worth of a human being. Surely the latter is what the former is about?
Granted, I guess you could simply be talking about their instrumental value to yourself (e.g. “they make me happy”), but I don’t think that’s really the main thrust of what “caring” is.
I’m not entirely sure what a “personal perception of the value of a human being” is, as distinct from the value or worth of a human being.
The “worth of a human being” implies that there is one, correct, “objective” value for that human being. We may not be able to observe it directly so we just estimate it, with some unavoidable noise and errors, but theoretically the estimates will converge to the “true” value. The worth of a human being is a function with one argument: that human being.
The “personal perception of the value of a human being” implies that there are multiple, different, “subjective” values for the same human being. There is no single underlying value to which the estimates converge. The personal perception of a value is a function with two arguments: who is evaluated and who does the evaluation.
So, either there is such a thing as the “objective” value and hence, implicitly, you should seek to approach that value, or there is not.
I don’t see any reason to believe in an objective worth of this kind, but I don’t really think it matters that much. If there is no single underlying value, then the act of assigning your own personal values to people is still the same thing as “passing judgement on the worth of humans”, because it’s the only thing those words could refer to; you can’t avoid the issue simply by calling it a subjective matter.
In my view, regardless of whether the value in question is “subjective” or “objective”, I don’t think it should be determined by the mere circumstance of whether I happened to meet that person or not.
So, for example, you believe that to a mother the value of her own child should be similar to that of a random person anywhere on Earth—right? It’s a “mere circumstance” that this particular human happens to be her child.
Probably not just any random person, because one can reasonably argue that children should be valued more highly than adults.
However, I do think that the mother should hold other people’s children as being of equal value to her own. That doesn’t mean valuing her own children less, it means valuing everyone else’s more.
Sure, it’s not very realistic to expect this of people, but that doesn’t mean they shouldn’t try.
one can reasonably argue that children should be valued more highly than adults.
One can reasonably argue the other way too. New children are easier to make than new adults.
However, I do think that the mother should hold other people’s children as being of equal value to her own. That doesn’t mean valuing her own children less, it means valuing everyone else’s more.
Since she has finite resources, is there a practical difference?
It seems to me extreme altruism is so easily abused that it will inevitably wipe itself out in the evolution of moral systems.
One can reasonably argue the other way too. New children are easier to make than new adults.
True. However, regardless of the relative value of children and adults, it is clear that one ought to devote significantly more time and effort to children than to adults, because they are incapable of supporting themselves and are necessarily in need of help from the rest of society.
Since she has finite resources, is there a practical difference?
Earlier I specifically drew a distinction between devoting time and effort and valuation; you don’t have to value your own children more to devote yourself to them and not to other people’s children.
That said, there are some practical differences. First of all, it may be better not to have children if you could do more to help other people’s children. Secondly, if you do have children and still have spare resources over and above what it takes to properly care for them, then you should consider where those spare resources could be spent most effectively.
It seems to me extreme altruism is so easily abused that it will inevitably wipe itself out in the evolution of moral systems.
If an extreme altruist recognises that taking such an extreme position would lead overall to less altruism in the future, and thus worse overall consequences, surely the right thing to do is stand up to that abuse. Besides, what exactly do you mean by “extreme altruism”?
If an extreme altruist recognises that taking such an extreme position would lead overall to less altruism in the future, and thus worse overall consequences, surely the right thing to do is stand up to that abuse.
A good point. By abuse I wouldn’t necessarily mean anything blatant though, just that selfish people are happy to receive resources from selfless people.
Besides, what exactly do you mean by “extreme altruism”?
Valuing people equally by default when their instrumental value isn’t considered. I hope I didn’t misunderstand you. That’s about as extreme as it gets, but I suppose you could get even more extreme by valuing other people more highly than yourself.
A good point. By abuse I wouldn’t necessarily mean anything blatant though, just that selfish people are happy to receive resources from selfless people.
Sure, and there isn’t really anything wrong with that as long as the person receiving the resources really needs them.
Valuing people equally by default when their instrumental value isn’t considered. I hope I didn’t misunderstand you. That’s about as extreme as it gets, but I suppose you could get even more extreme by valuing other people more highly than yourself.
The term “altruism” is often used to refer to the latter, so the clarification is necessary; I definitely don’t agree with that extreme.
In any case, it may not be reasonable to expect people (or yourself) to hold to that valuation, or to act in complete recognition of what that valuation implies even if they do, but it seems like the right standard to aim for. If you are likely biased against valuing distant strangers as much as you ought to, then it makes sense to correct for it.
My view is similar to yours, but with the following addition:
I have actual obligations to my friends and family, and I care about them quite a bit. I also care to a lesser extent about the city and region that I live in. If I act as though I instead have overriding obligations to the third world, then I risk being unable to satisfy my more basic obligations. To me, if for instance I spend my surplus income on mosquito nets instead of saving it and then have some personal disaster that my friends and family help bail me out of (because they also have obligations to me), I’ve effectively stolen their money and spent it on something they wouldn’t have chosen to spend it on. While I clearly have some leeway in these obligations and get to do some things other than save, charity falls into the same category as dinner out: I spend resources on it occasionally and enjoy or feel good about doing so, but it has to be kept strictly in check.
Thank you for posting that. My views and feelings about this topic are largely the same. (There goes any chance of my being accepted for a CFAR workshop. :))
On the question of thousands versus gigantic numbers of future people, what I would value is the amount of space they explore, physical and experiential, rather than numbers. A single planetful of humans is worth almost the same as a galaxy of them, if it consists of the same range of cultures and individuals, duplicated in vast numbers. The only greater value in a larger population is the more extreme range of random outliers it makes available.
Thank you for stating your perspective and opinion so clearly and honestly. It is valuable. Now allow me to do the same, and follow with a question (driven by sincere curiosity):
I do not think I am a worse person than you because of that.
I think you are.
It would be nice if fewer people died and suffered, sure. But “nice” is all it is. Call me heartless.
You are heartless.
I care about humanity surviving and thriving, in the abstract
Here’s my question, and I hope you take the time to answer as honestly as you wrote your comment:
Why?
After all you’ve rejected to care about, why in the world would you care about something as abstract as “humanity surviving and thriving”? It’s just an ape species, and there have already been billions of them. In addition, you clearly don’t care about numbers of individuals or quality of life. And you know the heat death of the universe will kill them all off anyway, if they survive the next few centuries.
I don’t mean to convince you otherwise, but it seems arbitrary—and surprisingly common—that someone who doesn’t care about the suffering or lives of strangers would care about that one thing out of the blue.
I can’t speak for shminux, of course, but caring about humanity surviving and thriving while not caring about the suffering or lives of strangers doesn’t seem at all arbitrary or puzzling to me.
I mean, consider the impact on me if 1000 people I’ve never met or heard of die tomorrow, vs. the impact on me if humanity doesn’t survive. The latter seems incontestably and vastly greater to me… does it not seem that way to you?
It doesn’t seem at all arbitrary that I should care about something that affects me greatly more than something that affects me less. Does it seem that way to you?
I mean, consider the impact on me if 1000 people I’ve never met or heard of die tomorrow, vs. the impact on me if humanity doesn’t survive. The latter seems incontestably and vastly greater to me… does it not seem that way to you?
Yes, rereading it, I think I misinterpreted response 2 as saying it doesn’t matter whether a population of 1,000 people has a long future or a population of one googolplex [has an equally long future]. That is, that population scope doesn’t matter, just durability and survival. I thought this defeated the usual Big Future argument.
But even so, his point 5 turns it around: Practically all people in the Big Future will be strangers, and if it is only “nicer” if they don’t suffer (translation: their wellbeing doesn’t really matter), then in what way would the Big Future matter?
I care a lot about humanity’s future, but primarily because of its impact on the total amount of positive and negative conscious experiences that it will cause.
...Slow deep breath… Ignore inflammatory and judgmental comments… Exhale slowly… Resist the urge to downvote… OK, I’m good.
First, as usual, TheOtherDave has already put it better than I could.
Maybe to elaborate just a bit.
First, almost everyone cares about the survival of the human race as a terminal goal. Very few have the infamous ‘après nous le déluge’ (“after us, the flood”) attitude. It seems neither abstract nor arbitrary to me. I want my family, friends and their descendants to have a bright and long-lasting future, and it is predicated on humanity in general having one.
Second, a good life and a bright future for the people I care about do not necessarily require me to care about the wellbeing of everyone on Earth. So I only get mildly and non-scalably sad when bad stuff happens to them. Other people, including you, care a lot. Good for them.
Unlike you (and probably Eliezer), I do not tell other people what they should care about, and I get annoyed at those who think their morals are better than mine. And I certainly support any steps to stop people from actively making other people’s lives worse, be it abusing them, telling them whom to marry or how much and what cause to donate to. But other than that, it’s up to them. Live and let live and such.
Hope this helps you understand where I am coming from. If you decide to reply, please consider doing it in a thoughtful and respectful manner this time.
I’m actually having difficulty understanding the sentiment “I get annoyed at those who think their morals are better than mine”. I mean, I can understand not wanting other people to look down on you as a basic emotional reaction, but doesn’t everyone think their morals are better than other people’s?
That’s the difference between morals and tastes. If I like chocolate ice cream and you like vanilla, then oh well. I don’t really care and certainly don’t think my tastes are better for anyone other than me. But if I think people should value the welfare of strangers and you don’t, then of course I think my morality is better. Morals differ from tastes in that people believe that it’s not just different, but WRONG to not follow them. If you remove that element from morality, what’s left? The sentiment “I have these morals, but other people’s morals are equally valid” sounds good, all egalitarian and such, but it doesn’t make any sense to me. People judge the value of things through their moral system, and saying “System B is as good as System A, based on System A” is borderline nonsensical.
Also, as an aside, I think you should avoid rhetorical statements like “call me heartless if you like” if you’re going to get this upset when someone actually does.
Well, kinda-sorta. I don’t think the subject is amenable to black-and-white thinking.
I would consider people who think their personal morals are the very best there is to be deluded and dangerous. However I don’t feel that people who think their morals are bad are to be admired and emulated either.
There is some similarity to the question of how smart you consider yourself to be. Thinking yourself smarter than everyone else is no good. Thinking yourself stupid isn’t good either.
So would you say that moral systems that don’t think they’re better than other moral systems are better than other moral systems? What happens if you know to profess the former kind of a moral system and agree with the whole statement? :)
So if my morality tells me that murdering innocent people is good, then that’s not worse than whatever your moral system is?
I know it’s possible to believe that (it was pretty much used as an example in my epistemology textbook for arguments against moral relativism), I just never figured anyone actually believed it.
What basis do you have for judging others’ morality other than your own morality? And if you ARE using your own morality to judge their morality, aren’t you really just checking for similarity to your own?
I mean, it’s the same way with beliefs. I understand not everything I believe is true, and I thus understand intellectually that someone else might be more correct (or, less wrong, if you will) than me. But in practice, when I’m evaluating others’ beliefs I basically compare them with how similar they are to my own. On a particularly contentious issue, I consider reevaluating my beliefs, which of course is more difficult and involved, but for simple judgement I just use comparison.
Which of course is similar to the argument people sometimes bring up about “moral progress”, claiming that a random walk would look like progress if it ended up where we are now (that is, progress is defined as similarity to modern beliefs).
My question though is that how do you judge morality/behavior if not through your own moral system? And if that is how you do it, how is your own morality not necessarily better?
if you ARE using your own morality to judge their morality, aren’t you really just checking for similarity to your own?
No, I don’t think so.
Morals are a part of the value system (mostly the socially-relevant part) and as such you can think of morals as a set of values. The important thing here is that there are many values involved, they have different importance or weight, and some of them contradict other ones. Humans, generally speaking, do not have coherent value systems.
When you need to make a decision, your mind evaluates (mostly below the level of your consciousness) a weighted balance of the various values affected by this decision. One side wins and you make a particular choice, but if the balance was nearly even you feel uncomfortable or maybe even guilty about that choice; if the balance was very lopsided, the decision feels like a no-brainer to you.
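(A toy way to picture that balance, with notation that is mine and not part of the comment: if a choice $a$ bears on values $v_1, \dots, v_n$ with weights $w_1, \dots, w_n$, the mind roughly picks $\arg\max_a \sum_i w_i \, v_i(a)$, and the discomfort or guilt tracks how narrow the winning margin was.)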
Given the diversity and incoherence of personal values, comparison of morals is often an iffy thing. However, there’s no reason to consider your own value system to be the very best there is, especially given that it’s your conscious mind that makes such comparisons, while part of morality is submerged and usually unseen by the consciousness. Looking at an exact copy of your own morals you will evaluate them as just fine, but not necessarily perfect.
Also don’t forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were.
This is a somewhat frustrating situation, where we both seem to agree on what morality is, but are talking over each other. I’ll make two points and see if they move the conversation forward:
1: “There’s no reason to consider your own value system to be the very best there is”
This seems to be similar to the point I made above about acknowledging on an intellectual level that my (factual) beliefs aren’t the absolute best there is. The same logic holds true for morals. I know I’m making some mistakes, but I don’t know where those mistakes are. On any individual issue, I think I’m right, and therefore logically if someone disagrees with me, I think they’re wrong. This is what I mean by “thinking that one’s own morals are the best”. I know I might not be right on everything, but I think I’m right about every single issue, even the ones I might really be wrong about. After all, if I was wrong about something, and I was also aware of this fact, I would simply change my beliefs to the right thing (assuming the concept is binary. I have many beliefs I consider to be only approximations, which I consider to be only the best of any explanation I have heard so far. Not perfect, but “least wrong”).
Which brings me to point 2.
2: “Also don’t forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were.”
I’m absolutely confused as to what this means. To me, a moral belief and a factual belief are approximately equal, at least internally (if I’ve been equivocating between the two, that’s why). I know I can’t alter my moral beliefs on a whim, but that’s because I have no reason to want to. Consider self-modifying to want to murder innocents. I can’t do this, primarily because I don’t want to, and CAN’T want to for any conceivable reason (what reason does Gandhi have to take the murder pill if he doesn’t get a million dollars?). I suppose modifying instrumental values to terminal values (which morals are) to enhance motivation is a possible reason, but that’s an entirely different can of worms. If I wished I held certain moral beliefs, I already have them. After all, morality is just saying “You should do X”. So wishing I had a different morality is like saying “I wish I thought I should do X”. What does that mean?
Not being who you wish to be is an issue of akrasia, not morality. I consider the two to be separate issues, with morality being an issue of beliefs and akrasia being an issue of motivation.
In short, I’m with you for the first line and two following paragraphs, and then you pull a conclusion out in the next paragraph that I disagree with. Clearly there’s a discontinuity either in my reading or your writing.
To me, a moral belief and a factual belief are approximately equal
Ah. It seems we approach morals from somewhat different angles. To you, morals are somewhat like physics—a system of “hard” facts that, generally speaking, are either correct or not. As you say, “On any individual issue, I think I’m right, and therefore logically if someone disagrees with me, I think they’re wrong.”
To me, morals are more like preferences—a flexible system of ways to evaluate choices. You can have multiple such ways, and they don’t have to be either correct or incorrect.
Consider a simple example: eating meat. I am a carnivore and think that eating meat is absolutely fine from the morality point of view. Let’s take Alice who is an ideological vegetarian. She feels that eating meat is morally wrong.
My moral position is different from (in fact, diametrically opposed to) Alice’s, but I’m not going to say that Alice’s morals are wrong. They are just different, and she has every right to have her own.
That does not apply to everything, of course. There are “zones” where I’m fine with opposite morals and there are “zones” where I am not. But even when I would not accept a sufficiently different morality I would hesitate to call it wrong. It seems an inappropriate word to use when there is no external, objective yardstick one could apply. It probably would be better to say that there is a range of values/morals that I consider acceptable and there is a range which I do not.
If I wished I held certain moral beliefs, I already have them.
No, I don’t think so. Morals are values, not desires. It’s not particularly common to wish to hold different values (I think), but I don’t see why this is impossible. For example, consider somebody who values worldly success, winning, being at the top. But he has a side which isn’t too happy with this constant drive, the trampling of everything in the rush to be the first, the sacrifices it requires. That side of his would prefer him to value success less.
In general, people sometimes wish to radically change themselves (religious (de)conversions, acceptance of major ideologies, etc.) and that usually involves changing their morality. That doesn’t happen in a single moment.
My moral position is different from (in fact, diametrically opposed to) Alice’s, but I’m not going to say that Alice’s morals are wrong
You do realize she’s implicitly calling you complicit in the perpetuation of the suffering and deaths of millions of animals, right? I’m having difficulty understanding how you can NOT say that her morality is wrong. Her ACTIONS are clearly unobjectionable (eating plants is certainly not worse than eating meat under the vast majority of ethical systems) but her MORALITY is quite controversial. I have a feeling that you accept this case because she is not doing anything that violates your own moral system, while you are doing something that violates hers. To use a (possibly hyperbolic and offensive) analogy, this is similar to a case where a murderer calls the morals of someone who doesn’t accept murder “just different”, and something they have the full right to have.
No, I don’t think so. (and following text)
I don’t think your example works. He values success, AND he values other things (family, companionship, etc.). I’m not sure why you’re calling different values “Different sides” as though they are separate agents. We all have values that occasionally conflict. I value a long life, even biological immortality if possible (I know, what am I doing on lesswrong with a value like that? /sarcasm), but I wouldn’t sacrifice 1000 lives a day to keep me alive atop a golden throne. This doesn’t seem like a case of my “Don’t murder” side wanting me to value immortality less, it’s more a case of considering the expected utility of my actions and coming to a conclusion about what collateral damage I’m willing to accept. It’s a straight calculation, no value readjustment required.
As for your last point, I’ve never experienced such a radical change (I was raised religiously, but outside of weekly mass my family never seemed to take it very seriously and I can’t remember caring too much about it). I actually don’t know what makes other people adopt ideologies. For me, I’m a utilitarian because it seems like a logical way to formalize my empathy and altruistic desires, and to this day I have difficulty grokking deontology like natural law theology (you would think being raised catholic would teach you some of that. It did not).
So, to summarize my ramblings: I think your first example only LOOKS like reasonable disagreement because Alice’s actions are unobjectionable to you, and you would feel differently if positions were reversed. I think your example of different sides is really just explaining different values, which have to be weighed against each other but need not cause moral distress. And I have no idea what to make of your last point.
If I ignored or misstated any of your points, or am just completely talking over you and not getting the point at all, please let me know.
I’m having difficulty understanding how you can NOT say that her morality is wrong.
I think the terms “acceptable” and “not acceptable” are much better here than right and wrong.
If the positions were reversed, I might find Alice’s morality unacceptable to me, but I still wouldn’t call it wrong.
I’m not sure why you’re calling different values “Different sides” as though they are separate agents.
No, I’m not talking about different values here. Having different conflicting values is entirely normal and commonplace. I am here implicitly accepting the multi-agent theory of mind and saying that a part of Bob’s (let’s call the guy Bob) personality would like to change his values. It might even be a dominant part of Bob’s conscious personality, but it still is having difficulty controlling his drive to win.
Or let’s take a different example, with social pressure. Ali Ababwa emigrated from Backwardistan to the United States. His original morality was that women are… let’s say inferior. However Ali went to school in the US, got educated and somewhat assimilated. He understands—consciously—that his attitude towards women is neither adequate nor appropriate and moreover, his job made it clear to him that he ain’t in Backwardistan any more and noticeable sexism will get him fired. And yet his morals do not change just because he would prefer them to change. Maybe they will, eventually, but it will take time.
It’s not clear to me that comparing moral systems on a scale of good and bad makes sense without a metric outside the systems.
So if my morality tells me that murdering innocent people is good, then that’s not worse than whatever your moral system is?
So while I wouldn’t murder innocent people myself, comparing our moral systems on a scale of good and bad is uselessly meta, since that meta-reality doesn’t seem to have any metric I can use. Any statements of good or bad are inside the moral systems that I would be trying to compare. Making a comparison inside my own moral system doesn’t seem to provide any new information.
There’s no law of physics that talks about morality, certainly. Morals are derived from the human brain though, which is remarkably similar between individuals. With the exception of extreme outliers, possibly involving brain damage, all people feel emotions like happiness, sadness, pain and anger. Shouldn’t it be possible to judge most morality on the basis of these common features, making an argument like “wanton murder is bad, because it goes against the empathy your brain evolved to feel, and hurts the survival chance you are born valuing”? I think this is basically the point EY makes about the “psychological unity of humankind”.
Of course, this dream goes out the window with UFAI and aliens. Let’s hope we don’t have to deal with those.
Shouldn’t it be possible to judge most morality on the basis of these common features, making an argument like “wanton murder is bad, because it goes against the empathy your brain evolved to feel, and hurts the survival chance you are born valuing”?
Yes, it should. However, in the hypothetical case involved, the reason is not true; the hypothetical brain does not have the quality “Has empathy and values survival and survival is impaired by murder”.
We are left with the simple truth that evolution (including memetic evolution) selects for things which produce offspring that imitate them, and “Has a moral system that prohibits murder” is a quality that successfully creates offspring that typically have the quality “Has a moral system that prohibits murder”.
The different quality “Commits wanton murder” is less successful at creating offspring in modern society, because convicted murderers don’t get to teach children that committing wanton murder is something to do.
It seems to me that when you explicitly make your own virtue or lack thereof a topic of discussion, and challenge readers in so many words to “call [you] heartless”, you should not then complain of someone else’s “inflammatory and judgmental comments” when they take you up on the offer.
And it doesn’t seem to me that Hedonic_Treader’s response was particularly thoughtless or disrespectful.
(For what it’s worth, I don’t think your comments indicate that you’re heartless.)
It’s interesting because people will often accuse a low-status out-group of “thinking they are better than everyone else”*. But I had never actually seen anyone claim that their in-group is better than everyone else; the accusation was always made of straw… until I saw Hedonic Treader’s comment.
I do sort of understand the attitude of the utilitarian EAs. If you really believe that everyone must value everyone else’s life equally, then you’d be horrified by people’s brazen lack of caring. It is quite literally like watching a serial killer casually talk about how many people they have killed and finding it odd that other people are horrified. After all, each life you fail to save is essentially the same as a murder under utilitarianism.
*I’ve seen people make this accusation against nerds, atheists, fedora wearers, feminists, left-leaning persons, Christians, etc.
I expect that’s correct, but I’m not sure your justification for it is correct. In particular it seems obviously possible for the following things all to be true:
A thinks her group is better than others.
A’s thinking this is obvious enough for B to be able to discern it with some confidence.
A never explicitly says that her group is better than others.
and I think people who say (e.g.) that atheists think they’re smarter than everyone else would claim that that’s what’s happening.
I repeat, I agree that these accusations are usually pretty strawy, but it’s a slightly more complicated variety of straw than simply claiming that people have said things they haven’t. More specifically, I think the usual situation is something like this:
A really does think that, to some extent and in some respects, her group is better than others.
But so does everyone else.
B imagines that he’s discerned unusual or unreasonable opinions of this sort in A.
But really he hasn’t; at most he’s picked up on something that he could find anywhere if he chose to look.
[EDITED to add, for clarity:] By “But so does everyone else” I meant that (almost!) everyone thinks that (many of) the groups they belong to are (to some extent and in some respects) better than others. Most of us mostly wouldn’t say so; most of us would mostly agree that these differences are statistical only and that there are respects in which our groups are worse too; but, still, on the whole if a person chooses to belong to some group (e.g., Christians or libertarians or effective altruists or whatever) that’s partly because they think that group gets right (or at least more right) some things that other groups get wrong (or at least less right).
I do imagine that the first situation is more common, in general, than the second.
This is entirely because of the point:
But so does everyone else.
A group that everyone considers better than others must be a single group, and probably very small; this requirement therefore limits your second scenario to a very small pool of people, while I imagine that your first scenario is very common.
Sorry, I wasn’t clear enough. By “so does everyone else” I meant “everyone else considers the groups they belong to to be, to some extent and in some respects, better than others”.
Ah, that clarification certainly changes your post for the better. Thanks. In light of it, I do agree that the second scenario is common; but looking closely at it, I’m not sure that it’s actually different to the first scenario. In both cases, A thinks her group is better; in both cases, B discerns that fact and calls excessive attention to it.
but, still, on the whole if a person chooses to belong to some group (e.g., Christians or libertarians or effective altruists or whatever) that’s partly because they think that group gets right (or at least more right) some things that other groups get wrong (or at least less right).
Well, if I belong to the group of chocolate ice cream eaters, I do think that eating chocolate ice cream is better than eating vanilla ice cream—by my standards; it doesn’t follow that I also believe it’s better by your standards or by objective standards (whatever they might be) and feel smug about it.
Sure. Some things are near-universally understood to be subjective and personal. Preference in ice cream is one of them. Many others are less so, though; moral values, for instance. Some even less; opinions about apparently-factual matters such as whether there are any gods, for instance.
(Even food preferences—a thing so notoriously subjective that the very word “taste” is used in other contexts to indicate something subjective and personal—can in fact give people that same sort of sense of superiority. I think mostly for reasons tied up with social status.)
Perhaps to avoid confusion, my comment wasn’t intended as an in-group out-group thing or even as a statement about my own relative status.
“Better than” and “worse than” are very simple relative judgments. If A rapes 5 victims a week and B rapes 6, A is a better person than B. If X donates 1% of his income potential to good charities and Y donates 2%, X is a worse person than Y (all else equal). It’s a rather simple statement of relative moral status.
Here’s the problem: If we pretend—like some in the rationalist community do—that all behavior is morally equivalent and all morals are equal, then there is no social incentive to behave prosocially when possible. Social feedback matters and moral judgments have their legitimate place in any on-topic discourse.
Finally, caring about not caring is self-defeating: one cannot logically judge judgmentalism without being judgmental oneself.
If we pretend—like some in the rationalist community do—that all behavior is morally equivalent and all morals are equal
That’s a strawman. I haven’t seen anyone say anything like that. What some people do say is that there is no objective standard by which to judge various moralities (that doesn’t make them equal, by the way).
there is no social incentive to behave prosocially when possible
Of course there is. Behavior has consequences regardless of morals. It is quite common to have incentives to behave (or not) in certain ways without morality being involved.
moral judgments have their legitimate place in any on-topic discourse.
Of course there is. Behavior has consequences regardless of morals. It is quite common to have incentives to behave (or not) in certain ways without morality being involved.
What do you mean by “morality”? Were the incentives the Heartstone wearer was facing when deciding whether to kill the kitten about morality, or not?
By morality I mean a particular part of somebody’s system of values. Roughly speaking, morality is the socially relevant part of the value system (though that’s not a hard definition, but rather a pointer to the area where you should search for it).
You are saying that shminux is “a worse person than you” and also “heartless”, but I am not sure what these words mean. How do you measure which person is better as compared to another person ? If the answer is, “whoever cares about more people is better”, then all you’re saying is, “shminux cares about fewer people because he cares about fewer people”. This is true, but tautologically so.
All morals are axioms, not theorems, and thus all moral claims are tautological.
Whatever morals we choose, we are driven to choose them by the morals we already have – the ones we were born with and raised to have. We did not get our morals from an objective external source. So no matter what your morals, if you condemn someone else by them, your condemnation will be tautological.
Yes, at some level there are basic moral claims that behave like axioms, but many moral claims are much more like theorems than axioms.
Derived moral claims also depend upon factual information about the real world, and thus they can be false if they are based on incorrect beliefs about reality.
I disagree. There are degrees of caring, and appropriate responses to them. Admittedly, “nice” is a term with no specific meaning, but most of us can probably put it on a relative ranking with other positive terms, such as “non-zero benefit” or “decent” (which I, and probably most people, would rank below “nice”) and “excellent”, “wonderful”, “the best thing in the world” (in the hyperbolic “best thing I have in mind right now” sense), or “literally, after months of introspection, study, and multiplying, I find that this is the best thing which could possibly occur at this time”; I suspect most native English speakers would agree that those are stronger sentiments than “nice”. I can certainly think of things that are more important than merely “nice” yet less important than a reduction in death and suffering.
For example, I would really like a Tesla car, with all the features. In the category of remotely-feasible things somebody could actually give me, I actually value that higher than there’s any rational reason for. On the other hand, if somebody gave me the money for such a car, I wouldn’t spend it on one… I don’t actually need a car, in fact don’t have a place for it, and there are much more valuable things I could do with that money. Donating it to some highly-effective charity, for example.
Leaving aside the fact that “every human being in existence” appears to require excluding a number of people who really are devoting their lives to bringing about reductions in suffering and death, there are lots of people who would respond to a cessation of some cause of suffering or death more positively than to simply think it “nice”. Maybe not proportionately more positively—as the post says, our care-o-meters don’t scale that far—but there would still be a major difference. I don’t know how common, in actual numbers, that reaction is vs. the “It would be nice” reaction (not to mention other possible reactions), but it is absolutely a significant number of people even among those who aren’t devoting their whole life towards that goal.
Pretty much every human being in existence who thinks that stopping death and suffering is a good thing, still spends resources on themselves and their loved ones beyond the bare minimum needed for survival. They could spend some money to buy poor Africans malaria nets, but have something which is not death or suffering which they consider more important than spending the money to alleviate death and suffering.
In that sense, it’s nice that death and suffering are alleviated, but that’s all.
it is absolutely a significant number of people even among those who aren’t devoting their whole life towards that goal
“Not devoting their whole life towards stopping death and suffering” equates to “thinks something else is more important than stopping death and suffering”.
False dichotomy. You can have (many!) things which are more than merely “nice” yet less than the thing you spend all available resources on. To take a well-known public philanthropist as an example, are you seriously claiming that because he does not spend every cent he has eliminating malaria as fast as possible, Bill Gates’ view on malaria eradication is that “it’s nice that death and suffering are alleviated, but that’s all”?
We should probably taboo the word “nice” here, since we seem likely to be operating on different definitions of it. To rephrase my second sentence of this post, then: You can have (many!) things which you hold to be important and work to bring about, but which you do not spend every plausibly-available resource on.
Also, your final sentence is not logically consistent. To show that a particular goal is the most important thing to you, you only need to devote more resources (including time) to it than to any other particular goal. If you allocate 49% of your resources to ending world poverty, 48% to being a billionaire playboy, and 3% to personal/private uses that are not strictly required for either of those goals, that is probably not the most efficient possible manner to allocate your resources, but there is nothing you value more than ending poverty (a major cause of suffering and death) even though it doesn’t even consume a majority of your resources. Of course, this assumes that the value of your resources is fixed wherever you spend them; in the real world, the marginal value of your investments (especially in things like medicine) goes down the more resources you pump into them in a given time frame; a better use might be to invest a large chunk of your resources into things that generate more resources, while providing as much towards your anti-suffering goals as they can efficiently use at once.
Let’s be a bit more concrete here. If you devote approximately half your resources to ending poverty and half to being a billionaire playboy, that means something like this: you value saving 10000 Africans’ lives less than you value having a second yacht. I’m sure that second yacht is fun to have, but I think it’s reasonable to categorize something that you value less than 1/10000 of the increment from “one yacht” to “two yachts” as no more important than “nice”.
This is of course not a problem unique to billionaire playboys, but it’s maybe a more acute problem for them; a psychologically equivalent luxury for an ordinarily rich person might be a second house costing $1M, which corresponds to 1⁄100 as many African lives and likely brings a bigger gain in personal utility; one for an ordinarily not-so-rich person might be a second car costing $10k, another 100x fewer dead Africans and (at least for some—e.g., two-income families living in the US where getting around without a car can be a biiiig pain) a considerable gain in personal utility. There’s still something kinda indecent about valuing your second car more than a person’s life, but at least to my mind it’s substantially less indecent than valuing your second megayacht more than 10000 people’s lives.
Suppose I have a net worth of $1M and you have a net worth of $10B. Each of us chooses to devote half our resources to ending poverty and half to having fun. That means that I think $500k of fun-having is worth the same as $500k of poverty-ending, and you think $5B of fun-having is worth the same as $5B of poverty-ending. But $5B of poverty-ending is about 10,000 times more poverty-ending than $500k of poverty-ending—but $5B of fun-having is nowhere near 10,000 times more fun than $500k of fun-having. (I doubt it’s even 10x more.) So in this situation it is reasonable to say that you value poverty-ending much less, relative to fun-having, than I do.
Pedantic notes: I’m supposing that your second yacht costs you $100M and that you can save one African’s life for $10k; billionaires’ yachts are often more expensive and the best estimates I’ve heard for saving poor people’s lives are cheaper. Presumably if you focus on ending poverty rather than on e.g. preventing malaria then you think that’s a more efficient way of helping the global poor, which makes your luxury trade off against more lives. I am using “saving lives” as a shorthand; presumably what you actually care about is something more like time-discounted aggregate QALYs. Your billionaire playboy’s luxury purchase might be something other than a yacht. Offer void where prohibited by law. Slippery when wet.
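For concreteness, here is a minimal arithmetic sketch of the trade-off described above, using the assumed illustrative figures from the pedantic notes (a $100M second yacht, roughly $10k to save one life); the numbers are placeholders rather than careful estimates:

```python
# A rough, illustrative calculation only; both figures are assumptions taken
# from the pedantic notes above, not careful estimates.
YACHT_COST = 100_000_000   # assumed cost of a second megayacht, in dollars
COST_PER_LIFE = 10_000     # assumed rough cost to save one life, in dollars

lives_forgone = YACHT_COST / COST_PER_LIFE
print(f"Second yacht: ~{lives_forgone:,.0f} lives' worth of donations forgone")

# Scaling the same kind of luxury down scales the forgone lives down with it:
for luxury, cost in [("second house", 1_000_000), ("second car", 10_000)]:
    print(f"{luxury}: ~{cost / COST_PER_LIFE:,.0f} lives' worth forgone")
```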
And, for the avoidance of doubt, I strongly endorse devoting half your resources to ending poverty and half to being a billionaire playboy, if the alternative is putting it all into being a billionaire playboy. The good you can do that way is tremendous, and I’d take my hat off to you if I were wearing one. I just don’t think it’s right to describe that situation by saying that poverty is the most important thing to you.
You can have (many!) things which you hold to be important and work to bring about, but which you do not spend every plausibly-available resource on.
What about the argument from marginal effectiveness? I.e. unless the best thing for you to work on is so small that your contribution reduces its marginal effectiveness below that of the second-best thing, you should devote all of your resources to the best thing.
I don’t myself act on the conclusion, but I also don’t see a flaw in the argument.
This is exactly how I feel. I would slightly amend 1 to “I care about family, friends, some other people I know, and some other people I don’t know but I have some other connection to”. For example, I care about people who are where I was several years ago and I’ll offer them help if we cross paths—there are TDT reasons for this. Are they the “best” people for me to help under utilitarian grounds? No, and so what?
Personally I see EA* as kind of a dangerous delusion, basically people being talked into doing something stupid (in the sense that they’re probably moving away from maximizing their own true utility function to the extent that such a thing exists). When I hear about someone giving away 50% of their income when they’re only middle class to begin with I feel more pity than admiration.
* Meaning the extreme, “all human lives are equally valuable to me” version, rather than just a desire to not waste charity money.
I don’t understand this. Why should my utility function value me having a large income or having a large amount of money? What does that get me?
I don’t have a good logical reason for why my life is a lot more valuable than anyone else’s. I have a lot more information about how to effectively direct resources into improving my own life vs. improving the lives of others, but I can’t come up with a good reason to have a dominantly large “Life of leplen” term in my utility function. Much of the data suggests that happiness/life quality isn’t well correlated with income above a certain income range and that one of the primary purposes of large disposable incomes is status signalling. If I have cheaper ways of signalling high social status, why wouldn’t I direct resources into preserving/improving the lives of people who get much better life quality/dollar returns than I do? It doesn’t seem efficient to keep investing in myself for little to no return.
I wouldn’t feel comfortable winning a 500 dollar door prize in a drawing where half the people in the room were subsistence farmers. I’d probably tear up my ticket and give someone else a shot to win. From my perspective, just because I won the lottery on birth location and/or abilities doesn’t mean I’m entitled to hundreds of times as many resources as someone else who may be more deserving but less lucky.
With that being said, I certainly don’t give anywhere near half of my income to charity and it’s possible the values I actually live may be closer to what you describe than the situation I outline. I’m not sure, and not sure how it changes my argument.
I don’t understand this. Why should my utility function value me having a large income or having a large amount of money?
With that being said, I certainly don’t give anywhere near half of my income to charity and it’s possible the values I actually live may be closer to what you describe than the situation I outline. I’m not sure, and not sure how it changes my argument.
Sounds like you answered your own question!
(It’s one thing to have some simplistic far-mode argument about how this or that doesn’t matter, or how we should sacrifice ourselves for others, but the near-mode nitty-gritty of the real-world is another thing).
I don’t feel the weight of the world. Because it does not weigh on me.
Note: having reread what I wrote, I suspect that some people might find it kind of Objectivist. I actually tried reading Atlas Shrugged and quit after 100 pages or so, getting extremely annoyed by the author belaboring an obvious and trivial point over and over. So I only have a vague idea what the movement is all about. And I have no interest in finding out more, given that people who find this kind of writing insightful are not ones I want to associate with.
I don’t disagree, and I don’t think you’re a bad person, and my intent is not to guilt or pressure you. My intent is more to show some people that certain things that may feel impossible are not impossible. :-)
A few things, though:
This seems like a cop out to me. Given a bunch of people trying to help the world, it would be best for all of them to do the thing that they think most helps the world. Often, this will lead to diversity (not just because people have different ideas about what is good, but also because of diminishing marginal returns and saturation). Sometimes, it won’t (e.g. after a syn bio proof of concept that kills 1⁄4 of the race I would hope that diversity in problem-selection would decrease). “It is best to diversify and hope” seems like a platitude that dodges the fun parts.
I also have this feeling, in a sense. I interpret it very differently, and I am aware of the typical mind fallacy, but I also caution against the “you must be Fundamentally Different” fallacy. Part of the theme behind this post is “you can interpret the internal caring feelings differently if you want”, and while I interpret my care-senses differently, I do empathize with this sentiment.
That’s not to say that you should come around to my viewpoint, by any means. But if you (or others) would like to try, for one reason or another, consider the following points:
Do you care only about the people who are currently close friends, or also the people who could be close friends? Is the value a property of the person, or a property of the fact that that person has been brought to your awareness?
Would you care more about humans in a context where humanity is treated as the ‘in-group’? For example, consider a situation where an alien race is at war with humans, and a roving band of alien brutes have captured a human family and are torturing them for fun. Does this boil your blood? Or do you not really care?
I assume that you wouldn’t push a friend in front of the trolley to save ten strangers. However, if you and a friend were in a room with ten strangers behind a veil of uncertainty, and were informed that the twelve of you were about to play in a trolley game, would you sign a contract which stated that (assuming unanimous agreement) the pusher agrees to push the pushee?
In my case, much of my decision to care about the rest of the world is due to an adjustment upwards of the importance of other people (after noticing that I tend to care significantly about people after I have gotten to know them very well, and deciding that people don’t matter less just because I’m not yet close to them). There’s also a significant portion of my caring that comes from caring about others because I would want others to care about me if the positions were reversed, and this seeming like the right action in a timeless sense.
Finally, much of my caring comes from treating all of humanity as my in-group (everyone is a close friend, I just don’t know most of them yet; see also the expanding circle).
I mess with my brother sometimes, but anyone else who tries to mess with my brother has to go through me first. Similarly there is some sense in which I don’t “care” about most of the nameless masses who are out of my sight (in that I don’t have feelings for them), but there’s a fashion in which I do care about them, in that anyone who fucks with humans fucks with me.
Disease, war, and death are all messing with my people, and while I may not be strong enough to do anything about it today, there will come a time.
There may be a group of people, such that it is possible for any one individual of the group to become my close friend, but where it is not possible for all the individuals to become my close friends simultaneously.
In that case, saying “any individual could become a close friend, so I should multiply ‘caring for one friend’ by the number of individuals in the group” is wrong. Instead, I should multiply “caring for one friend” by the number of individuals in the group who can become my friend simultaneously, and not take into account the individuals in excess of that. In fact, even that may be too strong. It may be possible for one individual in the group to become my close friend only at the cost of reducing the closeness to my existing friends, in which case I should conclude that the total amount I care shouldn’t increase at all.
The point is that the fact that someone happens to be your close friend seems like the wrong reason to care about them.
Let’s say, for example, that:
If X was my close friend, I would care about X
If Y was my close friend, I would care about Y
X and Y could not both be close friends of mine simultaneously.
Why should whether I care for X or care for Y depend on which one I happen to end up being close friends with? Rather, why shouldn’t I just care about both X and Y regardless of whether they are my close friends or not?
Perhaps I have a limited amount of caring available and I am only able to care for a certain number of people. If I tried to care for both X and Y I would go over my limit and would have to reduce the amount of caring for other people to make up for it. In fact, “only X or Y could be my close friend, but not both” may be an effect of that.
It’s not “they’re my close friend, and that’s the reason to care about them”, it’s “they’re under my caring limit, and that allows me to care about them”. “Is my close friend” is just another way to express “this person happened, by chance, to be added while I was still under my limit”. There is nothing special about this person, compared to the pool of all possible close friends, except that this person happened to have been added at the right time (or under randomly advantageous circumstances that don’t affect their merit as a person, such as living closer to you).
Of course, this sounds bad because of platitudes we like to say but never really mean. We like to say that our friends are special. They aren’t; if you had lived somewhere else or had different random experiences, you’d have had different close friends.
I think I would state a similar claim in a very different way. Friends are allies; both of us have implicitly agreed to reserve resources for the use of the other person in the friendship. (Resources are often as simple as ‘time devoted to a common activity’ or ‘emotional availability.’) Potential friends and friends might be indistinguishable to an outside observer, but to me (or them) there’s an obvious difference in that a friend can expect to ask me for something and get it, and a potential friend can’t.
(Friendships in this view don’t have to be symmetric: there are people whose complaints I’d listen to but whom I don’t expect would listen to mine, and the reverse exists as well.)
I think that it’s reasonable to call facts ‘special’ relative to counterfacts: yes, I would have had different college friends if I had gone to a different college, but I did actually go to the college I went to, and actually did make the friends I did there.
That’s a solid point, and to a significant extent I agree.
There are quite a lot of things that people can spend these kinds of resources on that are very effective at a small scale. This is an entirely sufficient basis to justify the idea of friends, or indeed “allies”, which is a more accurate term in this context. A network of local interconnections of such friends/allies who devote time and effort to one another is quite simply a highly efficient way to improve overall human well-being.
This also leads to a very simple, unbiased moral justification for devoting resources to your close friends; it’s simply that you, more so than other people, are in a unique position to affect the well-being of your friends, and vice versa. That kind of argument is also an entirely sufficient basis for some amount of “selfishness”—ceteris paribus, you yourself are in a better position to improve your own well-being than anyone else is.
However, this is not the same thing as “caring” in the sense So8res is using the term; I think he’s using the term more in the sense of “value”. For the above reasons, you can value your friends equally to anyone else while still devoting more time and effort to them. In general, you’re going to be better able to help your close friends than you are a random stranger on the street.
The way you put it, it seems like you want to care for both X and Y but are unable to.
However, if that’s the case then So8res’s point carries, because the core argument in the post translates to “if you think you ought to care about both X and Y but find yourself unable to, then you can still try to act the way that you would if you did, in fact, care about both X and Y”.
“I want to care for an arbitrarily chosen person from the set of X and Y” is not “I want to care for X and Y”. It’s “I want to care for X or Y”.
Why do you think so? It seems to me the fact that someone is my close friend is an excellent reason to care about her.
I think it depends on what you mean by “care”.
If you mean “devote time and effort to”, sure; I completely agree that it makes a lot of sense to do this for your friends, and you can’t do that for everyone.
If you mean “value as a human being and desire their well-being”, then I think it’s not justifiable to afford special privilege in this regard to close friends.
By “care” I mean allocating a considerably higher value to this particular human compared to a random one.
Yes, I understand you do, but why do you think so?
I don’t think the worth of a human being should be decided upon almost entirely circumstantial grounds, namely their proximity and/or relation to myself. If anything it should be a function of the qualities or the nature of that person, or perhaps even blanket equality.
If I believe that my friends are more valuable, it should be because of the qualities that led to them being my friend rather than simply the fact that they are my friends. However, if that’s so then there are many, many other people in the world who have similar qualities but are not my friends.
I assume you would pay your own mortgage. Would you mind paying my mortgage as well?
I can’t pay everyone’s mortgage, and nor can anyone else, so different people will need to pay for different mortgages.
Which approach works better, me paying my mortgage and you paying yours, or me paying your mortgage and you paying mine?
If you care equally for two people, your money should go to the one with the greatest need. It is very unlikely that in a country with many mortgage-payers, the person with the greatest need is you. So you should be paying down people’s mortgages until the mortgages of everyone in the world leave them no worse off than you with respect to mortgages; only then should you pay anything toward your own.
And even if it’s impractical to distribute your money to all mortgage payers in the world, surely you could find a specific mortgage payer who is so bad off that paying the mortgage of just this one person satisfies a greater need than paying off your own.
But you don’t. And you can’t. And everyone doesn’t and can’t, not just for mortgages, but for, say, food or malaria nets. You don’t send all your income above survival level to third-worlders who need malaria nets (or whatever other intervention people need the most); you don’t care for them and yourself equally.
Yes, if I really ought to value other human beings equally then it means I ought to devote a significant amount of time and/or money to altruistic causes, but is that really such an absurd conclusion?
Perhaps I don’t do those things, but that doesn’t mean I can’t and it doesn’t mean I shouldn’t.
You can say either
You ought to value other human beings equally, but you don’t.
You do value other human beings equally, and you ought to act in accordance with that valuation, but you don’t.
You appear to be claiming 2 and denying 1. However, I don’t see a significant difference between 1 and 2; 1 and 2 result in exactly the same actions by you and it ends up just being a matter of semantics.
I agree; I don’t see a significant difference between thinking that I ought to value other human beings equally but failing to do so, and actually viewing them equally and not acting accordingly. If I accept either (1) or (2) it’s still a moral failure, and it is one that I should act to correct. In either case, what matters is the actions that I ought to take as a result (i.e. effective altruism), and I think the implications are the same in both cases.
That being said, I guess the methods that I would use to correct the problem would be different in either hypothetical. If it’s (1) then there may be ways of thinking about it that would result in a better valuation of other people, or perhaps to correct for the inaccuracy of the care-o-meter as per the original post.
If it’s (2), then the issue is one of akrasia, and there are plenty of psychological tools or rationalist techniques that could help.
Of course, (1) and (2) aren’t the only possibilities here; there’s at least two more that are important.
You seem to be agreeing by not really agreeing. What does it even mean to say “I value other people equally but I don’t act on that”? Your actions imply a valuation, and in that implied valuation you clearly value yourself more than other people. It’s like saying “I prefer chocolate over vanilla ice cream, but if you offer me both I’ll always pick the vanilla”. Then you don’t really prefer chocolate over vanilla, because that’s what it means to prefer something.
My actions alone don’t necessarily imply a valuation, or at least not one that makes any sense.
There are a few different levels at which one can talk about what it means to value something, and revealed preference is not the only one that makes sense.
Is this basically another way of saying that you’re not the king of your brain, or something else?
That’s one way to put it, yes.
As usual, the word “better” hides a lot of relevant detail. Better for whom? By what measure?
Shockingly, in at least some cases by some measures, though, it works better for us if I pay your debt and you pay my debt, because it is possible for a third party to get much, much better terms on repayment than the original borrower. In many cases, debts can be sold for pennies on the dollar to anyone except the original borrower. See any of these articles
Ah. It seems we have been talking about somewhat different things.
You are talking about the worth of a human being. I’m talking about my personal perception of the value of a human being under the assumption that other people can and usually do have different perceptions of the same value.
I try not to pass judgement of the worth of humans, but I am quite content with assigning my personal values to people based, in part, on “their proximity and/or relation to myself”.
I’m not entirely sure what a “personal perception of the value of a human being” is, as distinct from the value or worth of a human being. Surely the latter is what the former is about?
Granted, I guess you could simply be talking about their instrumental value to yourself (e.g. “they make me happy”), but I don’t think that’s really the main thrust of what “caring” is.
The “worth of a human being” implies that there is one, correct, “objective” value for that human being. We may not be able to observe it directly so we just estimate it, with some unavoidable noise and errors, but theoretically the estimates will converge to the “true” value. The worth of a human being is a function with one argument: that human being.
The “personal perception of the value of a human being” implies that there are multiple, different, “subjective” values for the same human being. There is no single underlying value to which the estimates converge. The personal perception of a value is a function with two arguments: who is evaluated and who does the evaluation.
So, either there is such a thing as the “objective” value and hence, implicitly, you should seek to approach that value, or there is not.
I don’t see any reason to believe in an objective worth of this kind, but I don’t really think it matters that much. If there is no single underlying value, then the act of assigning your own personal values to people is still the same thing as “passing judgement on the worth of humans”, because it’s the only thing those words could refer to; you can’t avoid the issue simply by calling it a subjective matter.
In my view, regardless of whether the value in question is “subjective” or “objective”, I don’t think it should be determined by the mere circumstance of whether I happened to meet that person or not.
So, for example, you believe that to a mother the value of her own child should be similar to that of a random person anywhere on Earth—right? It’s a “mere circumstance” that this particular human happens to be her child.
Probably not just any random person, because one can reasonably argue that children should be valued more highly than adults.
However, I do think that the mother should hold other peoples’ children as being of equal value to her own. That doesn’t mean valuing her own children less, it means valuing everyone else’s more.
Sure, it’s not very realistic to expect this of people, but that doesn’t mean they shouldn’t try.
One can reasonably argue the other way too. New children are easier to make than new adults.
Since she has finite resources, is there a practical difference?
It seems to me extreme altruism is so easily abused that it will inevitably wipe itself out in the evolution of moral systems.
True. However, regardless of the relative value of children and adults, it is clear that one ought to devote significantly more time and effort to children than to adults, because they are incapable of supporting themselves and are necessarily in need of help from the rest of society.
Earlier I specifically drew a distinction between devoting time and effort and valuation; you don’t have to value your own children more to devote yourself to them and not to other peoples’ children.
That said, there are some practical differences. First of all, it may be better not to have children if you could do more to help other peoples’ children. Secondly, if you do have children and still have spare resources over and above what it takes to properly care for them, then you should consider where those spare resources could be spent most effectively.
If an extreme altruist recognises that taking such an extreme position would lead overall to less altruism in the future, and thus worse overall consequences, surely the right thing to do is stand up to that abuse. Besides, what exactly do you mean by “extreme altruism”?
A good point. By abuse I wouldn’t necessarily mean anything blatant though, just that selfish people are happy to receive resources from selfless people.
Valuing people equally by default when their instrumental value isn’t considered. I hope I didn’t misunderstand you. That’s about as extreme as it gets, but I suppose you could get even more extreme by valuing other people more highly than yourself.
Sure, and there isn’t really anything wrong with that as long as the person receiving the resources really needs them.
The term “altruism” is often used to refer to the latter, so the clarification is necessary; I definitely don’t agree with that extreme.
In any case, it may not be reasonable to expect people (or yourself) to hold to that valuation, or to act in complete recognition of what that valuation implies even if they do, but it seems like the right standard to aim for. If you are likely biased against valuing distant strangers as much as you ought to, then it makes sense to correct for it.
My view is similar to yours, but with the following addition:
I have actual obligations to my friends and family, and I care about them quite a bit. I also care to a lesser extent about the city and region that I live in. If I act as though I instead have overriding obligations to the third world, then I risk being unable to satisfy my more basic obligations. To me, if for instance I spend my surplus income on mosquito nets instead of saving it and then have some personal disaster that my friends and family help bail me out of (because they also have obligations to me), I’ve effectively stolen their money and spent it on something they wouldn’t have chosen to spend it on. While I clearly have some leeway in these obligations and get to do some things other than save, charity falls into the same category as dinner out: I spend resources on it occasionally and enjoy or feel good about doing so, but it has to be kept strictly in check.
I feel like I’m somewhere halfway between you and so8res. I appreciate you sharing this perspective as well.
Thank you for posting that. My views and feelings about this topic are largely the same. (There goes any chance of my being accepted for a CFAR workshop. :))
On the question of thousands versus gigantic numbers of future people, what I would value is the amount of space they explore, physical and experiential, rather than numbers. A single planetful of humans is worth almost the same as a galaxy of them, if it consists of the same range of cultures and individuals, duplicated in vast numbers. The only greater value in a larger population is the more extreme range of random outliers it makes available.
Thank you for stating your perspective and opinion so clearly and honestly. It is valuable. Now allow me to do the same, and follow by a question (driven by sincere curiosity):
I think you are.
You are heartless.
Here’s my question, and I hope you take the time to answer as honestly as you wrote your comment:
Why?
After everything you’ve declined to care about, why in the world would you care about something as abstract as “humanity surviving and thriving”? It’s just an ape species, and there have already been billions of them. In addition, you clearly don’t care about numbers of individuals or quality of life. And you know the heat death of the universe will kill them all off anyway, if they survive the next few centuries.
I don’t mean to convince you otherwise, but it seems arbitrary—and surprisingly common—that someone who doesn’t care about the suffering or lives of strangers would care about that one thing out of the blue.
I can’t speak for shminux, of course, but caring about humanity surviving and thriving while not caring about the suffering or lives of strangers doesn’t seem at all arbitrary or puzzling to me.
I mean, consider the impact on me if 1000 people I’ve never met or heard of die tomorrow, vs. the impact on me if humanity doesn’t survive. The latter seems incontestably and vastly greater to me… does it not seem that way to you?
It doesn’t seem at all arbitrary that I should care about something that affects me greatly more than something that affects me less. Does it seem that way to you?
Yes, rereading it, I think I misinterpreted response 2 as saying it doesn’t matter whether a population of 1,000 people has a long future or a population of one googolplex [has an equally long future]. That is, that population size doesn’t matter, just durability and survival. I thought this defeated the usual Big Future argument.
But even so, his point 5 turns it around: Practically all people in the Big Future will be strangers, and if it is only “nicer” if they don’t suffer (translation: their wellbeing doesn’t really matter), then in what way would the Big Future matter?
I care a lot about humanity’s future, but primarily because of its impact on the total amount of positive and negative conscious experiences that it will cause.
...Slow deep breath… Ignore inflammatory and judgmental comments… Exhale slowly… Resist the urge to downvote… OK, I’m good.
First, as usual, TheOtherDave has already put it better than I could.
Maybe to elaborate just a bit.
First, almost everyone cares about the survival of the human race as a terminal goal. Very few have the infamous “après nous le déluge” (“after us, the flood”) attitude. It seems neither abstract nor arbitrary to me. I want my family, friends and their descendants to have a bright and long-lasting future, and it is predicated on humanity in general having one.
Second, a good life and a bright future for the people I care about does not necessarily require me to care about the wellbeing of everyone on Earth. So I only get mildly and non-scalably sad when bad stuff happens to them. Other people, including you, care a lot. Good for them.
Unlike you (and probably Eliezer), I do not tell other people what they should care about, and I get annoyed at those who think their morals are better than mine. And I certainly support any steps to stop people from actively making other people’s lives worse, be it abusing them, telling them whom to marry or how much and what cause to donate to. But other than that, it’s up to them. Live and let live and such.
Hope this helps you understand where I am coming from. If you decide to reply, please consider doing it in a thoughtful and respectful manner this time.
I’m actually having difficulty understanding the sentiment “I get annoyed at those who think their morals are better than mine”. I mean, I can understand not wanting other people to look down on you as a basic emotional reaction, but doesn’t everyone think their morals are better than other people’s?
That’s the difference between morals and tastes. If I like chocolate ice cream and you like vanilla, then oh well. I don’t really care and certainly don’t think my tastes are better for anyone other than me. But if I think people should value the welfare of strangers and you don’t, then of course I think my morality is better. Morals differ from tastes in that people believe that it’s not just different, but WRONG to not follow them. If you remove that element from morality, what’s left? The sentiment “I have these morals, but other people’s morals are equally valid” sounds good, all egalitarian and such, but it doesn’t make any sense to me. People judge the value of things through their moral system, and saying “System B is as good as System A, based on System A” is borderline nonsensical.
Also, as an aside, I think you should avoid rhetorical statements like “call me heartless if you like” if you’re going to get this upset when someone actually does.
I don’t.
Would you make that a normative statement?
Well, kinda-sorta. I don’t think the subject is amenable to black-and-white thinking.
I would consider people who think their personal morals are the very best there is to be deluded and dangerous. However I don’t feel that people who think their morals are bad are to be admired and emulated either.
There is some similarity to how smart you consider yourself to be. Thinking yourself smarter than everyone else is no good. Thinking yourself stupid isn’t good either.
So would you say that moral systems that don’t think they’re better than other moral systems are better than other moral systems? What happens if you know to profess the former kind of a moral system and agree with the whole statement? :)
In one particular aspect, yes. There are many aspects.
The barber shaves everyone who doesn’t shave himself..? X-)
So if my morality tells me that murdering innocent people is good, then that’s not worse than whatever your moral system is?
I know it’s possible to believe that (it was pretty much used as an example in my epistemology textbook for arguments against moral relativism), I just never figured anyone actually believed it.
You are confused between two very different statements:
(1) I don’t think that my morals are (always, necessarily) better than other people’s.
(2) I have no basis whatsoever for judging morality and/or behavior of other people.
What basis do you have for judging others’ morality other than your own morality? And if you ARE using your own morality to judge their morality, aren’t you really just checking for similarity to your own?
I mean, it’s the same way with beliefs. I understand not everything I believe is true, and I thus understand intellectually that someone else might be more correct (or, less wrong, if you will) than me. But in practice, when I’m evaluating others’ beliefs I basically compare them with how similar they are to my own. On a particularly contentious issue, I consider reevaluating my beliefs, which of course is more difficult and involved, but for simple judgement I just use comparison.
Which of course is similar to the argument people sometimes bring up about “moral progress”, claiming that a random walk would look like progress if it ended up where we are now (that is, progress is defined as similarity to modern beliefs).
My question though is that how do you judge morality/behavior if not through your own moral system? And if that is how you do it, how is your own morality not necessarily better?
No, I don’t think so.
Morals are a part of the value system (mostly the socially-relevant part) and as such you can think of morals as a set of values. The important thing here is that there are many values involved, they have different importance or weight, and some of them contradict other ones. Humans, generally speaking, do not have coherent value systems.
When you need to make a decision, your mind evaluates (mostly below the level of your consciousness) a weighted balance of the various values affected by this decision. One side wins and you make a particular choice, but if the balance was nearly even you feel uncomfortable or maybe even guilty about that choice; if the balance was very lopsided, the decision feels like a no-brainer to you.
Given the diversity and incoherence of personal values, comparison of morals is often an iffy thing. However there’s no reason to consider your own value system to be the very best there is, especially given that it’s your conscious mind that makes such comparisons, but part of morality is submerged and usually unseen by the consciousness. Looking at an exact copy of your own morals you will evaluate them as just fine, but not necessarily perfect.
Also don’t forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were.
This is a somewhat frustrating situation, where we both seem to agree on what morality is, but are talking over each other. I’ll make two points and see if they move the conversation forward:
1: “There’s no reason to consider your own value system to be the very best there is”
This seems to be similar to the point I made above about acknowledging on an intellectual level that my (factual) beliefs aren’t the absolute best there is. The same logic holds true for morals. I know I’m making some mistakes, but I don’t know where those mistakes are. On any individual issue, I think I’m right, and therefore logically if someone disagrees with me, I think they’re wrong. This is what I mean by “thinking that one’s own morals are the best”. I know I might not be right on everything, but I think I’m right about every single issue, even the ones I might really be wrong about. After all, if I was wrong about something, and I was also aware of this fact, I would simply change my beliefs to the right thing (assuming the concept is binary. I have many beliefs I consider to be only approximations, which I consider to be only the best of any explanation I have heard so far. Not perfect, but “least wrong”).
Which brings me to point 2.
2: “Also don’t forget that your ability to manipulate your own morals is limited. Who you are is not necessarily who you wish you were.”
I’m absolutely confused as to what this means. To me, a moral belief and a factual belief are approximately equal, at least internally (if I’ve been equivocating between the two, that’s why). I know I can’t alter my moral beliefs on a whim, but that’s because I have no reason to want to. Consider self-modifying to want to murder innocents. I can’t do this, primarily because I don’t want to, and CAN’T want to for any conceivable reason (what reason does Gandhi have to take the murder pill if he doesn’t get a million dollars?). I suppose modifying instrumental values to terminal values (which morals are) to enhance motivation is a possible reason, but that’s an entirely different can of worms. If I wished I held certain moral beliefs, I already have them. After all, morality is just saying “You should do X”. So wishing I had a different morality is like saying “I wish I thought I should do X”. What does that mean?
Not being who you wish to be is an issue of akrasia, not morality. I consider the two to be separate issues, with morality being an issue of beliefs and akrasia being an issue of motivation.
In short, I’m with you for the first line and two following paragraphs, and then you pull a conclusion out in the next paragraph that I disagree with. Clearly there’s a discontinuity either in my reading or your writing.
That’s already an excellent start :-)
Ah. It seems we approach morals from somewhat different angles. To you, morality is somewhat like physics—a system of “hard” facts which, generally speaking, are either correct or not. As you say, “On any individual issue, I think I’m right, and therefore logically if someone disagrees with me, I think they’re wrong.”
To me, morality is more like preferences—a flexible system of ways to evaluate choices. You can have multiple such systems and they don’t have to be either correct or not.
Consider a simple example: eating meat. I am a carnivore and think that eating meat is absolutely fine from the morality point of view. Let’s take Alice who is an ideological vegetarian. She feels that eating meat is morally wrong.
My moral position is different from (in fact, diametrically opposed to) Alice’s, but I’m not going to say that Alice’s morals are wrong. They are just different and she has full right to have her own.
That does not apply to everything, of course. There are “zones” where I’m fine with opposite morals and there are “zones” where I am not. But even when I would not accept a sufficiently different morality I would hesitate to call it wrong. It seems an inappropriate word to use when there is no external, objective yardstick one could apply. It probably would be better to say that there is a range of values/morals that I consider acceptable and there is a range which I do not.
No, I don’t think so. Morals are values, not desires. It’s not particularly common to wish to hold different values (I think), but I don’t see why this is impossible. For example, consider somebody who values worldly success, winning, being at the top. But he has a side which isn’t too happy with this constant drive, the trampling of everything in the rush to be the first, the sacrifices it requires. That side of his would prefer him to value success less.
In general, people sometimes wish to radically change themselves (religious (de)conversions, acceptance of major ideologies, etc.) and that usually involves changing their morality. That doesn’t happen in a single moment.
You do realize she’s implicitly calling you complicit in the perpetuation of the suffering and deaths of millions of animals, right? I’m having difficulty understanding how you can NOT say that her morality is wrong. Her ACTIONS are clearly unobjectionable (eating plants is certainly not worse than eating meat under the vast majority of ethical systems) but her MORALITY is quite controversial. I have a feeling you accept this case because she is not doing anything that violates your own moral system, while you are doing something that violates hers. To use a (possibly hyperbolic and offensive) analogy, this is similar to a case where a murderer calls the morals of someone who doesn’t accept murder “just different”, and something they have the full right to have.
I don’t think your example works. He values success, AND he values other things (family, companionship, etc.). I’m not sure why you’re calling different values “different sides” as though they are separate agents. We all have values that occasionally conflict. I value a long life, even biological immortality if possible (I know, what am I doing on lesswrong with a value like that? /sarcasm), but I wouldn’t sacrifice 1000 lives a day to keep me alive atop a golden throne. This doesn’t seem like a case of my “Don’t murder” side wanting me to value immortality less, it’s more a case of considering the expected utility of my actions and coming to a conclusion about what collateral damage I’m willing to accept. It’s a straight calculation, no value readjustment required.
As for your last point, I’ve never experienced such a radical change (I was raised religiously, but outside of weekly mass my family never seemed to take it very seriously and I can’t remember caring too much about it). I actually don’t know what makes other people adopt ideologies. For me, I’m a utilitarian because it seems like a logical way to formalize my empathy and altruistic desires, and to this day I have difficulty grokking deontology like natural law theology (you would think being raised catholic would teach you some of that. It did not).
So, to summarize my ramblings: I think your first example only LOOKS like reasonable disagreement because Alice’s actions are unobjectionable to you, and you would feel differently if positions were reversed. I think your example of different sides is really just explaining different values, which have to be weighed against each other but need not cause moral distress. And I have no idea what to make of your last point.
If I ignored or misstated any of your points, or am just completely talking over you and not getting the point at all, please let me know.
I think the terms “acceptable” and “not acceptable” are much better here than right and wrong.
If the positions were reversed, I might find Alice’s morality unacceptable to me, but I still wouldn’t call it wrong.
No, I’m not talking about different values here. Having different conflicting values is entirely normal and commonplace. I am here implicitly accepting the multi-agent theory of mind and saying that a part of Bob’s (let’s call the guy Bob) personality would like to change his values. It might even be a dominant part of Bob’s conscious personality, but it still is having difficulty controlling his drive to win.
Or let’s take a different example, with social pressure. Ali Ababwa emigrated from Backwardistan to the United States. His original morality was that women are… let’s say inferior. However Ali went to school in the US, got educated and somewhat assimilated. He understands—consciously—that his attitude towards women is neither adequate nor appropriate and moreover, his job made it clear to him that he ain’t in Backwardistan any more and noticeable sexism will get him fired. And yet his morals do not change just because he would prefer them to change. Maybe they will, eventually, but it will take time.
Sure, but do you accept that other people have?
I think akrasia could also be an issue of being mistaken about your beliefs, all of which you’re not conscious of at any given time.
It’s not clear to me that comparing moral systems on a scale of good and bad makes sense without a metric outside the systems.
So while I wouldn’t murder innocent people myself, comparing our moral systems on a scale of good and bad is uselessly meta, since that meta-reality doesn’t seem to have any metric I can use. Any statements of good or bad are inside the moral systems that I would be trying to compare. Making a comparison inside my own moral system doesn’t seem to provide any new information.
There’s no law of physics that talks about morality, certainly. Morals are derived from the human brain though, which is remarkably similar between individuals. With the exception of extreme outliers, possibly involving brain damage, all people feel emotions like happiness, sadness, pain and anger. Shouldn’t it be possible to judge most morality on the basis of these common features, making an argument like “wanton murder is bad, because it goes against the empathy your brain evolved to feel, and hurts the survival chance you are born valuing”? I think this is basically the point EY makes about the “psychological unity of humankind”.
Of course, this dream goes out the window with UFAI and aliens. Let’s hope we don’t have to deal with those.
Yes, it should. However, in the hypothetical case involved, the reason is not true; the hypothetical brain does not have the quality “Has empathy and values survival and survival is impaired by murder”.
We are left with the simple truth that evolution (including memetic evolution) selects for things which produce offspring that imitate them, and “Has a moral system that prohibits murder” is a quality that successfully creates offspring that typically have the quality “Has a moral system that prohibits murder”.
The different quality “Commits wanton murder” is less successful at creating offspring in modern society, because convicted murderers don’t get to teach children that committing wanton murder is something to do.
I think those similarities are much less strong that EY appears to suggests; see e.g. “Typical Mind and Politics”.
It seems to me that when you explicitly make your own virtue or lack thereof a topic of discussion, and challenge readers in so many words to “call [you] heartless”, you should not then complain of someone else’s “inflammatory and judgmental comments” when they take you up on the offer.
And it doesn’t seem to me that Hedonic_Treader’s response was particularly thoughtless or disrespectful.
(For what it’s worth, I don’t think your comments indicate that you’re heartless.)
It’s interesting because people will often accuse a low-status out-group of “thinking they are better than everyone else”.* But I had never actually seen anyone claim that their in-group is better than everyone else; the accusation was always made of straw… until I saw Hedonic Treader’s comment.
I do sort of understand the attitude of the utilitarian EAs. If you really believe that everyone must value everyone else’s life equally, then you’d be horrified by people’s brazen lack of caring. It is quite literally like watching a serial killer casually talk about how many people they killed and finding it odd that other people are horrified. After all, each life you fail to save is essentially the same as a murder under utilitarianism.
*I’ve seen people make this accusation against nerds, atheists, fedora wearers, feminists, left-leaning persons, Christians, etc.
I expect that’s correct, but I’m not sure your justification for it is correct. In particular it seems obviously possible for the following things all to be true:
A thinks her group is better than others.
A’s thinking this is obvious enough for B to be able to discern it with some confidence.
A never explicitly says that her group is better than others.
and I think people who say (e.g.) that atheists think they’re smarter than everyone else would claim that that’s what’s happening.
I repeat, I agree that these accusations are usually pretty strawy, but it’s a slightly more complicated variety of straw than simply claiming that people have said things they haven’t. More specifically, I think the usual situation is something like this:
A really does think that, to some extent and in some respects, her group is better than others.
But so does everyone else.
B imagines that he’s discerned unusual or unreasonable opinions of this sort in A.
But really he hasn’t; at most he’s picked up on something that he could find anywhere if he chose to look.
[EDITED to add, for clarity:] By “But so does everyone else” I meant that (almost!) everyone thinks that (many of) the groups they belong to are (to some extent and in some respects) better than others. Most of us mostly wouldn’t say so; most of us would mostly agree that these differences are statistical only and that there are respects in which our groups are worse too; but, still, on the whole if a person chooses to belong to some group (e.g., Christians or libertarians or effective altruists or whatever) that’s partly because they think that group gets right (or at least more right) some things that other groups get wrong (or at least less right).
I do imagine that the first situation is more common, in general, than the second.
This is entirely because of the point:
But so does everyone else.
A group that everyone considers better than others must be a single group, and probably very small; this requirement therefore limits your second scenario to a very small pool of people, while I imagine that your first scenario is very common.
Sorry, I wasn’t clear enough. By “so does everyone else” I meant “everyone else considers the groups they belong to to be, to some extent and in some respects, better than others”.
Ah, that clarification certainly changes your post for the better. Thanks. In light of it, I do agree that the second scenario is common; but looking closely at it, I’m not sure that it’s actually different to the first scenario. In both cases, A thinks her group is better; in both cases, B discerns that fact and calls excessive attention to it.
Well, if I belong to the group of chocolate ice cream eaters, I do think that eating chocolate ice cream is better than eating vanilla ice cream—by my standards; it doesn’t follow that I also believe it’s better by your standards or by objective standards (whatever they might be) and feel smug about it.
Sure. Some things are near-universally understood to be subjective and personal. Preference in ice cream is one of them. Many others are less so, though; moral values, for instance. Some even less; opinions about apparently-factual matters such as whether there are any gods, for instance.
(Even food preferences—a thing so notoriously subjective that the very word “taste” is used in other contexts to indicate something subjective and personal—can in fact give people that same sort of sense of superiority. I think mostly for reasons tied up with social status.)
Perhaps to avoid confusion, my comment wasn’t intended as an in-group out-group thing or even as a statement about my own relative status.
“Better than” and “worse than” are very simple relative judgments. If A rapes 5 victims a week and B rapes 6, A is a better person than B. If X donates 1% of his income potential to good charities and Y donates 2%, X is a worse person than Y (all else equal). It’s a rather simple statement of relative moral status.
Here’s the problem: If we pretend—like some in the rationalist community do—that all behavior is morally equivalent and all morals are equal, then there is no social incentive to behave prosocially when possible. Social feedback matters and moral judgments have their legitimate place in any on-topic discourse.
Finally, caring about not caring is self-defeating: one cannot logically judge judgmentalism without being judgmental oneself.
That’s a strawman. I haven’t seen anyone say anything like that. What some people do say is that there is no objective standard by which to judge various moralities (that doesn’t make them equal, by the way).
Of course there is. Behavior has consequences regardless of morals. It is quite common to have incentives to behave (or not) in certain ways without morality being involved.
Why is that?
What do you mean by “morality”? Were the incentives the Heartstone wearer was facing when deciding whether to kill the kitten about morality, or not?
By morality I mean a particular part of somebody’s system of values. Roughly speaking, morality is the socially relevant part of the value system (though that’s not a hard definition, but rather a pointer to the area where you should search for it).
It seems self-termination was the most altruistic way of ending the discussion. A tad over the top, I think.
One can judge “judgmentalism on set A” without being “judgmental on set A” (while, of course, still being judgmental on set B).
You are saying that shminux is “a worse person than you” and also “heartless”, but I am not sure what these words mean. How do you measure which person is better as compared to another person? If the answer is, “whoever cares about more people is better”, then all you’re saying is, “shminux cares about fewer people because he cares about fewer people”. This is true, but tautologically so.
All morals are axioms, not theorems, and thus all moral claims are tautological.
Whatever morals we choose, we are driven to choose them by the morals we already have – the ones we were born with and raised to have. We did not get our morals from an objective external source. So no matter what your morals, if you condemn someone else by them, your condemnation will be tautological.
I don’t agree.
Yes, at some level there are basic moral claims that behave like axioms, but many moral claims are much more like theorems than axioms.
Derived moral claims also depend upon factual information about the real world, and thus they can be false if they are based on incorrect beliefs about reality.
Then every human being in existence is heartless.
I disagree. There are degrees of caring, and appropriate responses to them. Admittedly, “nice” is a term with no specific meaning, but most of us can probably put it on a relative ranking with other positive terms, such as “non-zero benefit” or “decent” (which I, and probably most people, would rank below “nice”) and “excellent”, “wonderful”, “the best thing in the world” (in the hyperbolic “best thing I have in mind right now” sense), or “literally, after months of introspection, study, and multiplying, I find that this is the best thing which could possibly occur at this time”; I suspect most native English speakers would agree that those are stronger sentiments than “nice”. I can certainly think of things that are more important than merely “nice” yet less important than a reduction in death and suffering.
For example, I would really like a Tesla car, with all the features. In the category of remotely-feasible things somebody could actually give me, I actually value that higher than there’s any rational reason for. On the other hand, if somebody gave me the money for such a car, I wouldn’t spend it on one… I don’t actually need a car, in fact don’t have a place for it, and there are much more valuable things I could do with that money. Donating it to some highly-effective charity, for example.
Leaving aside the fact that “every human being in existence” appears to require excluding a number of people who really are devoting their lives to bringing about reductions in suffering and death, there are lots of people who would respond to a cessation of some cause of suffering or death more positively than to simply think it “nice”. Maybe not proportionately more positively—as the post says, our care-o-meters don’t scale that far—but there would still be a major difference. I don’t know how common, in actual numbers, that reaction is vs. the “It would be nice” reaction (not to mention other possible reactions), but it is absolutely a significant number of people even among those who aren’t devoting their whole life towards that goal.
Pretty much every human being in existence who thinks that stopping death and suffering is a good thing still spends resources on themselves and their loved ones beyond the bare minimum needed for survival. They could spend some money to buy poor Africans malaria nets, but there is something, not itself a matter of death or suffering, which they consider more important than spending that money to alleviate death and suffering.
In that sense, it’s nice that death and suffering are alleviated, but that’s all.
“Not devoting their whole life towards stopping death and suffering” equates to “thinks something else is more important than stopping death and suffering”.
False dichotomy. You can have (many!) things which are more than merely “nice” yet less than the thing you spend all available resources on. To take a well-known public philanthropist as an example, are you seriously claiming that because he does not spend every cent he has eliminating malaria as fast as possible, Bill Gates’ view on malaria eradication is that “it’s nice that death and suffering are alleviated, but that’s all”?
We should probably taboo the word “nice” here; since we seem likely to be operating on different definitions of it. To rephrase my second sentence of this post, then: You can have (many!) things which you hold to be important and work to bring about, but which you do not spend every plausibly-available resource on.
Also, your final sentence is not logically consistent. To show that a particular goal is the most important thing to you, you only need to devote more resources (including time) to it than to any other particular goal. If you allocate 49% of your resources to ending world poverty, 48% to being a billionaire playboy, and 3% to personal/private uses that are not strictly required for either of those goals, that is probably not the most efficient possible manner to allocate your resources, but there is nothing you value more than ending poverty (a major cause of suffering and death) even though it doesn’t even consume a majority of your resources. Of course, this assumes that the value of your resources is fixed wherever you spend them; in the real world, the marginal value of your investments (especially in things like medicine) goes down the more resources you pump into them in a given time frame; a better use might be to invest a large chunk of your resources into things that generate more resources, while providing as much towards your anti-suffering goals as they can efficiently use at once.
Let’s be a bit more concrete here. If you devote approximately half your resources to ending poverty and half to being a billionaire playboy, that means something like this: you value saving 10000 Africans’ lives less than you value having a second yacht. I’m sure that second yacht is fun to have, but I think it’s reasonable to categorize something that you value less than 1/10000 of the increment from “one yacht” to “two yachts” as no more important than “nice”.
This is of course not a problem unique to billionaire playboys, but it’s maybe a more acute problem for them; a psychologically equivalent luxury for an ordinarily rich person might be a second house costing $1M, which corresponds to 1⁄100 as many African lives and likely brings a bigger gain in personal utility; one for an ordinarily not-so-rich person might be a second car costing $10k, another 100x fewer dead Africans and (at least for some—e.g., two-income families living in the US where getting around without a car can be a biiiig pain) a considerable gain in personal utility. There’s still something kinda indecent about valuing your second car more than a person’s life, but at least to my mind it’s substantially less indecent than valuing your second megayacht more than 10000 people’s lives.
Suppose I have a net worth of $1M and you have a net worth of $10B. Each of us chooses to devote half our resources to ending poverty and half to having fun. That means that I think $500k of fun-having is worth the same as $500k of poverty-ending, and you think $5B of fun-having is worth the same as $5B of poverty-ending. But $5B of poverty-ending is about 10,000 times more poverty-ending than $500k of poverty-ending—but $5B of fun-having is nowhere near 10,000 times more fun than $500k of fun-having. (I doubt it’s even 10x more.) So in this situation it is reasonable to say that you value poverty-ending much less, relative to fun-having, than I do.
Pedantic notes: I’m supposing that your second yacht costs you $100M and that you can save one African’s life for $10k; billionaires’ yachts are often more expensive and the best estimates I’ve heard for saving poor people’s lives are cheaper. Presumably if you focus on ending poverty rather than on e.g. preventing malaria then you think that’s a more efficient way of helping the global poor, which makes your luxury trade off against more lives. I am using “saving lives” as a shorthand; presumably what you actually care about is something more like time-discounted aggregate QALYs. Your billionaire playboy’s luxury purchase might be something other than a yacht. Offer void where prohibited by law. Slippery when wet.
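If it helps to see the numbers laid out, here is a minimal sketch of that arithmetic in Python. Every figure in it is just an assumption taken from the comments above ($100M second yacht, $1M second house, $10k second car, roughly $10k to save one life), not an independent estimate.

```python
# Illustrative sketch of the arithmetic above. All figures are the
# assumptions stated in the parent comments, not vetted estimates.

COST_PER_LIFE = 10_000  # assumed dollars to save one life

def lives_forgone(luxury_cost: float) -> float:
    """How many lives the same money could (by assumption) have saved."""
    return luxury_cost / COST_PER_LIFE

for item, cost in [("second megayacht", 100_000_000),
                   ("second house", 1_000_000),
                   ("second car", 10_000)]:
    print(f"{item}: ~{lives_forgone(cost):,.0f} lives forgone")

# The 50/50 split comparison: each person splits net worth evenly
# between fun-having and poverty-ending.
small, large = 1_000_000, 10_000_000_000  # assumed net worths
ratio = (large / 2) / (small / 2)
print(f"The billionaire's poverty-ending half is {ratio:,.0f}x bigger,")
print("but the fun-having half is presumably nowhere near 10,000x more fun.")
```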
And, for the avoidance of doubt, I strongly endorse devoting half your resources to ending poverty and half to being a billionaire playboy, if the alternative is putting it all into being a billionaire playboy. The good you can do that way is tremendous, and I’d take my hat off to you if I were wearing one. I just don’t think it’s right to describe that situation by saying that poverty is the most important thing to you.
Thank you, that’s what I would have said.
What about the argument from marginal effectiveness? I.e. unless the best thing for you to work on is so small that your contribution reduces its marginal effectiveness below that of the second-best thing, you should devote all of your resources to the best thing.
I don’t myself act on the conclusion, but I also don’t see a flaw in the argument.
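For what it’s worth, here is a toy sketch of that argument with made-up numbers: the “best” cause has diminishing marginal value as money flows into it, the second-best is assumed flat, and you only stop funding the best one once your own giving pushes its marginal value below the runner-up’s. With an individual-sized budget and a cause that only saturates at millions of dollars, the greedy rule sends everything to the best thing, which is the conclusion above.

```python
# Toy illustration of the marginal-effectiveness argument, with made-up
# numbers. Assumption: the best cause's marginal value per dollar falls
# as more money is given; the second-best cause is flat at 60 units/$.

def marginal_value_best(given: float) -> float:
    """Hypothetical diminishing returns for the single best cause."""
    return 100 / (1 + given / 1_000_000)

MARGINAL_VALUE_SECOND_BEST = 60  # assumed constant, for simplicity

budget, step, given_to_best = 50_000, 1_000, 0
for _ in range(budget // step):
    if marginal_value_best(given_to_best) < MARGINAL_VALUE_SECOND_BEST:
        break  # your giving has pushed the best cause below the runner-up
    given_to_best += step

print(f"Share of budget going to the best cause: {given_to_best / budget:.0%}")
# With a $50k budget against a cause that only saturates around $1M+,
# this prints 100% -- the conclusion of the argument above.
```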
This is exactly how I feel. I would slightly amend 1 to “I care about family, friends, some other people I know, and some other people I don’t know but I have some other connection to”. For example, I care about people who are where I was several years ago and I’ll offer them help if we cross paths—there are TDT reasons for this. Are they the “best” people for me to help on utilitarian grounds? No, and so what?
Personally I see EA* as kind of a dangerous delusion, basically people being talked into doing something stupid (in the sense that they’re probably moving away from maximizing their own true utility function to the extent that such a thing exists). When I hear about someone giving away 50% of their income when they’re only middle class to begin with I feel more pity than admiration.
* Meaning the extreme, “all human lives are equally valuable to me” version, rather than just a desire to not waste charity money.
I don’t understand this. Why should my utility function value me having a large income or having a large amount of money? What does that get me?
I don’t have a good logical reason for why my life is a lot more valuable than anyone else’s. I have a lot more information about how to effectively direct resources into improving my own life vs. improving the lives of others, but I can’t come up with a good reason to have a dominantly large “Life of leplen” term in my utility function. Much of the data suggests that happiness/life quality isn’t well correlated with income above a certain income range and that one of the primary purposes of large disposable incomes is status signalling. If I have cheaper ways of signalling high social status, why wouldn’t I direct resources into preserving/improving the lives of people who get much better life quality/dollar returns than I do? It doesn’t seem efficient to keep investing in myself for little to no return.
I wouldn’t feel comfortable winning a 500 dollar door prize in a drawing where half the people in the room were subsistence farmers. I’d probably tear up my ticket and give someone else a shot to win. From my perspective, just because I won the lottery on birth location and/or abilities doesn’t mean I’m entitled to hundreds of times as many resources as someone else who may be more deserving but less lucky.
With that being said, I certainly don’t give anywhere near half of my income to charity and it’s possible the values I actually live may be closer to what you describe than the situation I outline. I’m not sure, and not sure how it changes my argument.
Sounds like you answered your own question!
(It’s one thing to have some simplistic far-mode argument about how this or that doesn’t matter, or how we should sacrifice ourselves for others, but the near-mode nitty-gritty of the real world is another thing.)