I don’t disagree, and I don’t think you’re a bad person, and my intent is not to guilt or pressure you. My intent is more to show some people that certain things that may feel impossible are not impossible. :-)
A few things, though:
No one knows which of these is more likely to result in the long-term prosperity of the human race. So it is best to diversify and hope that one of these outliers does not end up killing all of us, intentionally or accidentally.
This seems like a cop out to me. Given a bunch of people trying to help the world, it would be best for all of them to do the thing that they think most helps the world. Often, this will lead to diversity (not just because people have different ideas about what is good, but also because of diminishing marginal returns and saturation). Sometimes, it won’t (e.g. after a syn bio proof of concept that kills 1⁄4 of the race I would hope that diversity in problem-selection would decrease). “It is best to diversify and hope” seems like a platitude that dodges the fun parts.
I do not “care about every single individual on this planet”. I care about myself, my family, friends and some other people I know.
I also have this feeling, in a sense. I interpret it very differently, and I am aware of the typical mind fallacy, but I also caution against the “you must be Fundamentally Different” fallacy. Part of the theme behind this post is “you can interpret the internal caring feelings differently if you want”, and while I interpret my care-senses differently, I do empathize with this sentiment.
That’s not to say that you should come around to my viewpoint, by any means. But if you (or others) would like to try, for one reason or another, consider the following points:
Do you care only about the people who are currently close friends, or also the people who could be close friends? Is the value a property of the person, or a property of the fact that that person has been brought to your awareness?
Would you care more about humans in a context where humanity is treated as the ‘in-group’? For example, consider a situation where an alien race is at war with humans, and a roving band of alien brutes have captured a human family and are torturing them for fun. Does this boil your blood? Or do you not really care?
I assume that you wouldn’t push a friend in front of the trolley to save ten strangers. However, if you and a friend were in a room with ten strangers behind a veil of uncertainty, and were informed that the twelve of you were about to play in a trolley game, would you sign a contract which stated that (assuming unanimous agreement) the pusher agrees to push the pushee?
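(To make the ex-ante arithmetic of that last scenario explicit, here is a sketch under assumptions the comment does not spell out: one pusher, one pushee, ten people on the track, the push happens only if the contract is signed, and the ten on the track die otherwise.)

```latex
% Behind the veil, you are equally likely to end up in any of the 12 roles.
\[
P(\text{you survive} \mid \text{contract signed}) = \tfrac{11}{12}
\qquad \text{vs.} \qquad
P(\text{you survive} \mid \text{no contract, no push}) = \tfrac{2}{12}
\]
% Under these assumptions, every participant does better ex ante by signing.
```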
In my case, much of my decision to care about the rest of the world is due to an adjustment upwards of the importance of other people (after noticing that I tend to care significantly about people after I have gotten to know them very well, and deciding that people don’t matter less just because I’m not yet close to them). There’s also a significant portion of my caring that comes from caring about others because I would want others to care about me if the positions were reversed, and this seeming like the right action in a timeless sense.
Finally, much of my caring comes from treating all of humanity as my in-group (everyone is a close friend, I just don’t know most of them yet; see also the expanding circle).
I mess with my brother sometimes, but anyone else who tries to mess with my brother has to go through me first. Similarly there is some sense in which I don’t “care” about most of the nameless masses who are out of my sight (in that I don’t have feelings for them), but there’s a fashion in which I do care about them, in that anyone who fucks with humans fucks with me.
Disease, war, and death are all messing with my people, and while I may not be strong enough to do anything about it today, there will come a time.
Do you care only about the people who are currently close friends, or also the people who could be close friends?
There may be a group of people, such that it is possible for any one individual of the group to become my close friend, but where it is not possible for all the individuals to become my close friends simultaneously.
In that case, saying “any individual could become a close friend, so I should multiply ‘caring for one friend’ by the number of individuals in the group” is wrong. Instead, I should multiply ‘caring for one friend’ by the number of individuals in the group who can become my friends simultaneously, and not take into account the individuals in excess of that. In fact, even that may be too strong. It may be possible for one individual in the group to become my close friend only at the cost of reducing the closeness to my existing friends, in which case I should conclude that the total amount I care shouldn’t increase at all.
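(A compact restatement of the comment above, with notation introduced here rather than taken from the thread: let $c$ be the caring assigned to one close friend, $N$ the size of the group, and $K$ the number of its members who could be close friends simultaneously.)

```latex
\[
\underbrace{c \cdot N}_{\text{naive multiplication}}
\qquad \text{vs.} \qquad
\underbrace{c \cdot \min(N, K)}_{\text{capped by simultaneity}}
\]
% And if a new friendship could only displace an existing one (K already saturated),
% the claimed marginal increase is 0.
```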
The point is that the fact that someone happens to be your close friend seems like the wrong reason to care about them.
Let’s say, for example, that:
If X was my close friend, I would care about X
If Y was my close friend, I would care about Y
X and Y could not both be close friends of mine simultaneously.
Why should whether I care for X or care for Y depend on which one I happen to end up being close friends with? Rather, why shouldn’t I just care about both X and Y regardless of whether they are my close friends or not?
Perhaps I have a limited amount of caring available and I am only able to care for a certain number of people. If I tried to care for both X and Y I would go over my limit and would have to reduce the amount of caring for other people to make up for it. In fact, “only X or Y could be my close friend, but not both” may be an effect of that.
It’s not “they’re my close friend, and that’s the reason to care about them”, it’s “they’re under my caring limit, and that allows me to care about them”. “Is my close friend” is just another way to express “this person happened, by chance, to be added while I was still under my limit”. There is nothing special about this person, compared to the pool of all possible close friends, except that this person happened to have been added at the right time (or under randomly advantageous circumstances that don’t affect their merit as a person, such as living closer to you).
Of course, this sounds bad because of platitudes we like to say but never really mean. We like to say that our friends are special. They aren’t; if you had lived somewhere else or had different random experiences, you’d have had different close friends.
“Is my close friend” is just another way to express “this person happened, by chance, to be added while I was still under my limit”. There is nothing special about this person, compared to the pool of all possible close friends, except that this person happened to have been added at the right time (or under randomly advantageous circumstances that don’t affect their merit as a person, such as living closer to you).
I think I would state a similar claim in a very different way. Friends are allies; both of us have implicitly agreed to reserve resources for the use of the other person in the friendship. (Resources are often as simple as ‘time devoted to a common activity’ or ‘emotional availability.’) Potential friends and friends might be indistinguishable to an outside observer, but to me (or them) there’s an obvious difference in that a friend can expect to ask me for something and get it, and a potential friend can’t.
(Friendships in this view don’t have to be symmetric: there are people whose complaints I’d listen to but who I don’t expect would listen to mine, and the reverse exists as well.)
They aren’t; if you had lived somewhere else or had different random experiences, you’d have had different close friends.
I think that it’s reasonable to call facts ‘special’ relative to counterfactuals: yes, I would have had different college friends if I had gone to a different college, but I did actually go to the college I went to, and actually did make the friends I did there.
That’s a solid point, and to a significant extent I agree.
There are quite a lot of things that people can spend these kinds of resources on that are very effective at a small scale. This is an entirely sufficient basis to justify the idea of friends, or indeed “allies”, which is a more accurate term in this context. A network of local interconnections of such friends/allies who devote time and effort to one another is quite simply a highly efficient way to improve overall human well-being.
This also leads to a very simple, unbiased moral justification for devoting resources to your close friends; it’s simply that you, more so than other people, are in a unique position to affect the well-being of your friends, and vice versa. That kind of argument is also an entirely sufficient basis for some amount of “selfishness”—ceteris paribus, you yourself are in a better position to improve your own well-being than anyone else is.
However, this is not the same thing as “caring” in the sense So8res is using the term; I think he’s using the term more in the sense of “value”. For the above reasons, you can value your friends equally to anyone else while still devoting more time and effort to them. In general, you’re going to be better able to help your close friends than you are a random stranger on the street.
The way you put it, it seems like you want to care for both X and Y but are unable to.
However, if that’s the case then So8res’s point carries, because the core argument in the post translates to “if you think you ought to care about both X and Y but find yourself unable to, then you can still try to act the way that you would if you did, in fact, care about both X and Y”.
“I want to care for an arbitrarily chosen person from the set of X and Y” is not “I want to care for X and Y”. It’s “I want to care for X or Y”.
Why do you think so? It seems to me the fact that someone is my close friend is an excellent reason to care about her.
I think it depends on what you mean by “care”.
If you mean “devote time and effort to”, sure; I completely agree that it makes a lot of sense to do this for your friends, and you can’t do that for everyone.
If you mean “value as a human being and desire their well-being”, then I think it’s not justifiable to afford special privilege in this regard to close friends.
By “care” I mean allocating a considerably higher value to this particular human compared to a random one.
Yes, I understand you do, but why do you think so?
I don’t think the worth of a human being should be decided upon almost entirely circumstantial grounds, namely their proximity and/or relation to myself. If anything it should be a function of the qualities or the nature of that person, or perhaps even blanket equality.
If I believe that my friends are more valuable, it should be because of the qualities that led to them being my friend rather than simply the fact that they are my friends. However, if that’s so then there are many, many other people in the world who have similar qualities but are not my friends.
I don’t think the worth of a human being should be decided upon almost entirely circumstantial grounds, namely their proximity and/or relation to myself.
I assume you would pay your own mortgage. Would you mind paying my mortgage as well?
I can’t pay everyone’s mortgage, and nor can anyone else, so different people will need to pay for different mortgages.
Which approach works better, me paying my mortgage and you paying yours, or me paying your mortgage and you paying mine?
If you care equally for two people, your money should go to the one with the greater need. It is very unlikely that, in a country with many mortgage-payers, the person with the greatest need is you. So you should be paying down other people’s mortgages until everyone in the world is no worse off than you with respect to mortgages; only then should you pay anything toward your own.
And even if it’s impractical to distribute your money to all mortgage payers in the world, surely you could find a specific mortgage payer who is so bad off that paying the mortgage of just this one person satisfies a greater need than paying off your own.
But you don’t. And you can’t. And everyone doesn’t and can’t, not just for mortgages, but for, say, food or malaria nets. You don’t send all your income above survival level to third-worlders who need malaria nets (or whatever other intervention people need the most); you don’t care for them and yourself equally.
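(A minimal sketch of the “greatest need first” allocation this comment describes; the names and amounts are invented for illustration, and the point is simply that under equal caring the smallest need gets funded last, if at all.)

```python
# Greedy "greatest need first" allocation: every chunk of money goes to whoever
# currently has the largest outstanding need, so "me" receives nothing until
# no one else is worse off than me.

def allocate(budget: float, needs: dict[str, float], chunk: float = 1.0) -> dict[str, float]:
    """Give money away one `chunk` at a time, always to the person with the largest remaining need."""
    paid = {name: 0.0 for name in needs}
    remaining = dict(needs)
    while budget > 0 and any(v > 0 for v in remaining.values()):
        neediest = max(remaining, key=remaining.get)       # person with the greatest outstanding need
        payment = min(chunk, remaining[neediest], budget)  # don't overpay them or overspend the budget
        paid[neediest] += payment
        remaining[neediest] -= payment
        budget -= payment
    return paid

# With equal caring and a limited budget, the smallest mortgage ("me") is funded last, if at all.
print(allocate(budget=300, needs={"me": 100, "stranger_a": 500, "stranger_b": 400}))
```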
Yes, if I really ought to value other human beings equally then it means I ought to devote a significant amount of time and/or money to altruistic causes, but is that really such an absurd conclusion?
Perhaps I don’t do those things, but that doesn’t mean I can’t and it doesn’t mean I shouldn’t.
You can say either:
1. You ought to value other human beings equally, but you don’t.
2. You do value other human beings equally, and you ought to act in accordance with that valuation, but you don’t.
You appear to be claiming 2 and denying 1. However, I don’t see a significant difference between 1 and 2; 1 and 2 result in exactly the same actions by you and it ends up just being a matter of semantics.
I agree; I don’t see a significant difference between thinking that I ought to value other human beings equally but failing to do so, and actually viewing them equally and not acting accordingly. If I accept either (1) or (2) it’s still a moral failure, and it is one that I should act to correct. In either case, what matters is the actions that I ought to take as a result (i.e. effective altruism), and I think the implications are the same in both cases.
That being said, I guess the methods that I would use to correct the problem would be different in either hypothetical. If it’s (1) then there may be ways of thinking about it that would result in a better valuation of other people, or perhaps to correct for the inaccuracy of the care-o-meter as per the original post.
If it’s (2), then the issue is one of akrasia, and there are plenty of psychological tools or rationalist techniques that could help.
Of course, (1) and (2) aren’t the only possibilities here; there are at least two more that are important.
You seem to be agreeing without really agreeing. What does it even mean to say “I value other people equally but I don’t act on that”? Your actions imply a valuation, and in that implied valuation you clearly value yourself more than other people. It’s like saying “I prefer chocolate over vanilla ice cream, but if you offer me both I’ll always pick the vanilla”. Then you don’t really prefer chocolate over vanilla, because that’s what it means to prefer something.
My actions alone don’t necessarily imply a valuation, or at least not one that makes any sense.
There are a few different levels at which one can talk about what it means to value something, and revealed preference is not the only one that makes sense.
Is this basically another way of saying that you’re not the king of your brain, or something else?
That’s one way to put it, yes.
As usual, the word “better” hides a lot of relevant detail. Better for whom? By what measure?
Shockingly, though, in at least some cases and by some measures it works better for us if I pay your debt and you pay mine, because a third party can often get much, much better repayment terms than the original borrower. In many cases, debts can be sold for pennies on the dollar to anyone except the original borrower. See any of these articles.
Ah. It seems we have been talking about somewhat different things.
You are talking about the worth of a human being. I’m talking about my personal perception of the value of a human being under the assumption that other people can and usually do have different perceptions of the same value.
I try not to pass judgement on the worth of humans, but I am quite content with assigning my personal values to people based, in part, on “their proximity and/or relation to myself”.
I’m not entirely sure what a “personal perception of the value of a human being” is, as distinct from the value or worth of a human being. Surely the latter is what the former is about?
Granted, I guess you could simply be talking about their instrumental value to yourself (e.g. “they make me happy”), but I don’t think that’s really the main thrust of what “caring” is.
I’m not entirely sure what a “personal perception of the value of a human being” is, as distinct from the value or worth of a human being.
The “worth of a human being” implies that there is one, correct, “objective” value for that human being. We may not be able to observe it directly, so we just estimate it, with some unavoidable noise and errors, but theoretically the estimates will converge to the “true” value. The worth of a human being is a function with one argument: that human being.
The “personal perception of the value of a human being” implies that there are multiple, different, “subjective” values for the same human being. There is no single underlying value to which the estimates converge. The personal perception of a value is a function with two arguments: who is evaluated and who does the evaluation.
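(A toy rendering of the arity distinction drawn above; the function names and types are invented here purely to illustrate the point, not taken from the thread.)

```python
# "Objective worth" as a function of the person alone, versus "personal perception
# of value" as a function of both the evaluator and the person being evaluated.

def worth(person: str) -> float:
    """'Objective' worth: one argument; in principle, every evaluator's
    estimate converges on the same number."""
    raise NotImplementedError  # the discussion questions whether such a value exists at all

def perceived_value(evaluator: str, person: str) -> float:
    """'Personal perception of value': two arguments; the result depends on who
    is evaluating, so proximity or relation to the evaluator can legitimately enter."""
    raise NotImplementedError  # different evaluators may return different numbers
```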
So, either there is such a thing as the “objective” value and hence, implicitly, you should seek to approach that value, or there is not.
I don’t see any reason to believe in an objective worth of this kind, but I don’t really think it matters that much. If there is no single underlying value, then the act of assigning your own personal values to people is still the same thing as “passing judgement on the worth of humans”, because it’s the only thing those words could refer to; you can’t avoid the issue simply by calling it a subjective matter.
In my view, regardless of whether the value in question is “subjective” or “objective”, I don’t think it should be determined by the mere circumstance of whether I happened to meet that person or not.
So, for example, you believe that to a mother the value of her own child should be similar to that of a random person anywhere on Earth—right? It’s a “mere circumstance” that this particular human happens to be her child.
Probably not just any random person, because one can reasonably argue that children should be valued more highly than adults.
However, I do think that the mother should hold other people’s children as being of equal value to her own. That doesn’t mean valuing her own children less; it means valuing everyone else’s more.
Sure, it’s not very realistic to expect this of people, but that doesn’t mean they shouldn’t try.
one can reasonably argue that children should be valued more highly than adults.
One can reasonably argue the other way too. New children are easier to make than new adults.
However, I do think that the mother should hold other people’s children as being of equal value to her own. That doesn’t mean valuing her own children less; it means valuing everyone else’s more.
Since she has finite resources, is there a practical difference?
It seems to me extreme altruism is so easily abused that it will inevitably wipe itself out in the evolution of moral systems.
One can reasonably argue the other way too. New children are easier to make than new adults.
True. However, regardless of the relative value of children and adults, it is clear that one ought to devote significantly more time and effort to children than to adults, because they are incapable of supporting themselves and are necessarily in need of help from the rest of society.
Since she has finite resources, is there a practical difference?
Earlier I specifically drew a distinction between devoting time and effort and valuation; you don’t have to value your own children more to devote yourself to them and not to other people’s children.
That said, there are some practical differences. First of all, it may be better not to have children if you could do more to help other people’s children. Secondly, if you do have children and still have spare resources over and above what it takes to properly care for them, then you should consider where those spare resources could be spent most effectively.
It seems to me extreme altruism is so easily abused that it will inevitably wipe itself out in the evolution of moral systems.
If an extreme altruist recognises that taking such an extreme position would lead overall to less altruism in the future, and thus worse overall consequences, surely the right thing to do is stand up to that abuse. Besides, what exactly do you mean by “extreme altruism”?
If an extreme altruist recognises that taking such an extreme position would lead overall to less altruism in the future, and thus worse overall consequences, surely the right thing to do is stand up to that abuse.
A good point. By abuse I wouldn’t necessarily mean anything blatant though, just that selfish people are happy to receive resources from selfless people.
Besides, what exactly do you mean by “extreme altruism”?
Valuing people equally by default when their instrumental value isn’t considered. I hope I didn’t misunderstand you. That’s about as extreme as it gets, but I suppose you could get even more extreme by valuing other people more highly than yourself.
A good point. By abuse I wouldn’t necessarily mean anything blatant though, just that selfish people are happy to receive resources from selfless people.
Sure, and there isn’t really anything wrong with that as long as the person receiving the resources really needs them.
Valuing people equally by default when their instrumental value isn’t considered. I hope I didn’t misunderstand you. That’s about as extreme as it gets, but I suppose you could get even more extreme by valuing other people more highly than yourself.
The term “altruism” is often used to refer to the latter, so the clarification is necessary; I definitely don’t agree with that extreme.
In any case, it may not be reasonable to expect people (or yourself) to hold to that valuation, or to act in complete recognition of what that valuation implies even if they do, but it seems like the right standard to aim for. If you are likely biased against valuing distant strangers as much as you ought to, then it makes sense to correct for it.