If you don’t feel like you care about billions of people, and you recognize that the part of your brain that cares about small numbers of people is scope insensitive, what observation causes you to believe that you do care about everyone equally?
Serious question; I traverse the reasoning the other way, and since I don’t care much about the aggregate six billion people I don’t know, I divide and say that I don’t care more than one six-billionth as much about the typical person that I don’t know.
People that I do know, I do care about, but I don’t have to multiply to figure my total caring; I have to add.
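To make the direction of that arithmetic concrete, here is a rough sketch; the “caring” numbers are invented placeholders, not measurements of anything:

```python
# Rough sketch of "divide the aggregate" vs. "multiply the individual";
# every number here is an invented placeholder, not a measurement of anything.
care_for_all_strangers = 1.0       # aggregate caring about the ~6 billion people I don't know
num_strangers = 6_000_000_000

# My direction: divide the aggregate down to a per-stranger figure.
care_per_stranger = care_for_all_strangers / num_strangers          # ~1.7e-10

# The other direction: take per-person caring as basic and multiply it up.
care_per_known_person = 1.0
implied_aggregate_care = care_per_known_person * num_strangers      # 6e9

# For the people I actually know, I don't multiply at all; I just add.
care_for_people_i_know = [1.0, 0.8, 0.5]
total_care_for_known = sum(care_for_people_i_know)                  # 2.3
print(care_per_stranger, implied_aggregate_care, total_care_for_known)
```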
I can think of two categories of responses.
One is something like “I care by induction”. Over the course of your life, you have ostensibly had multiple experiences of meeting new people, and ending up caring about them. You can reasonably predict that, if you meet more people, you will end up caring about them too. From there, it’s not much of a leap to “I should just start caring about people before I meet them”. After all, rational agents should not be able to predict changes in their own beliefs; you might as well update now.
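To make the “update now” step concrete, the principle I’m leaning on is conservation of expected evidence; a toy calculation (with made-up probabilities) shows that the expected posterior equals the prior, so there is no predictable change to wait for:

```python
# Toy check of "rational agents can't predict changes in their own beliefs"
# (conservation of expected evidence). All probabilities here are invented.
p_h = 0.5            # prior: "I would end up caring about this particular stranger"
p_e = 0.8            # probability of the observation E: "I meet them and get to know them"
p_h_given_e = 0.6    # belief after seeing E

# Given the three numbers above, the law of total probability pins down the
# belief after not-E:  P(H) = P(E)*P(H|E) + P(not E)*P(H|not E)
p_h_given_not_e = (p_h - p_e * p_h_given_e) / (1 - p_e)              # ~0.1

expected_posterior = p_e * p_h_given_e + (1 - p_e) * p_h_given_not_e
print(expected_posterior)   # ~0.5, i.e. the prior: no predictable net shift either way
```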
The other is something like “The caring is much better calibrated than the not-caring”. Let me use an analogy to physics. My everyday intuition says that clocks tick at the same rate for everybody, no matter how fast they move; my knowledge of relativity says clocks slow down significantly near c. The problem is that my intuition on the matter is baseless; I’ve never traveled at relativistic speeds. When my baseless intuition collides with rigorously-verified physics, I have to throw out my intuition.
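(For concreteness, the relativistic claim I have in mind is ordinary time dilation; at 0.9c a moving clock runs slow by a factor of roughly 2.3:

$$\Delta t' = \frac{\Delta t}{\sqrt{1 - v^2/c^2}}, \qquad v = 0.9c \;\Rightarrow\; \frac{1}{\sqrt{1 - 0.81}} \approx 2.3.$$

)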
I’ve also never had direct interaction with or made meaningful decisions about billions of people at a time, but I have lots of experience with individual people. “I don’t care much about billions of people” is an almost totally unfounded wild guess, but “I care lots about individual people” has lots of solid evidence, so when they collide, the latter wins.
(Neither of these are ironclad, at least not as I’ve presented them, but hopefully I’ve managed to gesture in a useful direction.)
Your second category of response seems to say “my intuitions about considering a group of people, taken billions at a time, aren’t reliable, but my intuitions about considering the same group of people, one at a time, are”. You then conclude that you care because taking the billions of people one at a time implies that you care about them.
But it seems that I could apply the same argument a little differently—instead of applying it to how many people you consider at a time, apply it to the total size of the group. “my intuitions about how much I care about a group of billions are bad, even though my intuitions about how much I care about a small group are good.” The second argument would, then, imply that it is wrong to use your intuitions about small groups to generalize to large groups—that is, the second argument refutes the first. Going from “I care about the people in my life” to “I would care about everyone if I met them” is as inappropriate as going from “I know what happens to clocks at slow speeds” to “I know what happens to clocks at near-light speeds”.
I’ll go a more direct route:
The next time you are in a queue with strangers, imagine the two people behind you (that you haven’t met before and don’t expect to meet again and didn’t really interact with much at all, but they are /concrete/). Put them on one track in the trolley problem, and one of the people that you know and care about on the other track.
If you would prefer to save the two strangers over the one tribesman, you are different enough from me that we will have trouble talking about the subject, and you will probably find me to be a morally horrible person in hypothetical situations.
To address your first category: when I meet new people and interact with them, I do more than gain information; I perform transitive actions that move them out of the group of “people I’ve never met”, whom I don’t care about, and into the group of people that I do care about.
Addressing your second: I found that a very effective way to probe my intuition is to imagine a group of X people that I have never met (or specific strangers) on one minecart track, and a specific person that I know on the other. I care so little about small groups of strangers, compared to people that I know, that my intuition about billions comes out roughly proportional; the dominant factor in my caring about strangers is that some of them are important to people who are important to me, and are therefore indirectly important to me.
I second this question: Maybe I’m misunderstanding something, but part of me craves a set of axioms to justify the initial assumptions. That is: Person A cares about a small number of people who are close to them. Why does this equate to Person A having to care about everyone who isn’t?
For me, personally, I know that you could choose a person at random in the world, write a paragraph about them, and give it to me, and by reading it I would care about them a lot more than before, even though reading that piece of paper wouldn’t have changed anything about them. Similarly, becoming friends with someone doesn’t usually change the person that much, but it increases how much I care about them an awful lot.
Therefore, I look at all 7 billion people in the world, and even though I barely care about them, I know that it would be trivial for me to increase how much I care about one of them, and therefore I should care about them as if I had already completed that process, even if I hadn’t.
Maybe a better way of putting this is that I know that all of the people in the world are potential carees of mine, so I should act as though I already care about these people, in deference to possible future-me.
For the most part, I follow—but there’s something I’m missing. I think it lies somewhere in: “It would be trivial for me to increase how much I care about one of them, and therefore I should care about them as if I had already completed that process, even if I hadn’t.”
Is the underlying “axiom” here that you wish to maximize the number of effects that come from the caring you give to people, because that’s what an altruist does? Or that you wish to maximize your caring for people?
To contextualize the above question, here’s a (nonsensical, but illustrative) parallel: I get cuts and scrapes when running through the woods. They make me feel alive; I like these momentary pain stimuli. It would be trivial for me to woods-run more and get more cuts and scrapes. Therefore I should just get cuts and scrapes.
I know it’s silly, but let me explain: A person usually doesn’t want to maximize their cuts and scrapes, even though cuts and scrapes might be appreciated at some point. Thus, the above scenario’s conclusion seems silly. Similarly, I don’t feel a necessity to maximize my caring—even though caring might be nice at some point. Caring about someone is a product of my knowing them, and I care about a person because I know them in a particular way (if I knew a person and thought they were scum, I would not care about them). The fact that I could know someone else, and thus hypothetically care about them, doesn’t make me feel as if I should.
If, on the other hand, the axiom is true—then why bother considering your intuitive “care-o-meter” in the first place?
I think there’s something fundamental I’m missing.
(Upon further thought, is there an agreed-upon intrinsic value to caring that my ignorance of some LW culture has led me to miss? This would also explain wanting to maximize caring.)
(Upon further-further thought, is it something like the following internal dialogue? “I care about people close to me. I also care about the fate of mankind. I know that the fate of mankind as a whole is far more important than the fate of the people close to me. Since I value internal consistency, in order for my caring-mechanism to be consistent, my care for the fate of mankind must be proportional to my care for the people close to me. Since my caring mechanism is incapable of actually computing such a proportionality, the next best thing is to be consciously aware of how much it should care if it were able, and act accordingly.”)
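(If that is roughly the dialogue, then the proportionality it asks for is just a multiplication that the care-o-meter can’t perform. A toy version, with invented placeholder numbers:

```python
# Toy version of the proportionality the internal dialogue above asks for.
# Every number is an invented placeholder, not a claim about anyone's psychology.
care_per_close_person = 1.0
num_close_people = 10
care_for_close_people = care_per_close_person * num_close_people             # 10.0

num_people_in_mankind = 7_000_000_000
felt_care_for_mankind = 2.0   # what a scope-insensitive care-o-meter actually reports

# What consistency would demand if caring scaled linearly with the number of people:
consistent_care_for_mankind = care_per_close_person * num_people_in_mankind  # 7e9

# The gap that "be consciously aware of how much it should care" is meant to bridge:
shortfall_factor = consistent_care_for_mankind / felt_care_for_mankind       # 3.5e9
print(care_for_close_people, consistent_care_for_mankind, shortfall_factor)
```

)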
I care about self-consistency, but being self-consistent is something that must happen naturally; I can’t self-consistently say, “This feeling is self-inconsistent, therefore I will change this feeling to be self-consistent.”
… Oh.
Hm. In that case, I think I’m still missing something fundamental.
I care about self-consistency because an inconsistent self is very strong evidence that I’m doing something wrong.
It’s not very likely that if I take the minimum steps to make the evidence of the error go away, I will make the error go away.
The general case of “find a self-inconsistency, make the minimum change to remove it” is not error-correcting.
I actually think that your internal dialogue was a pretty accurate representation of what I was failing to say. And as for self-consistency having to be natural, I agree, but if you’re aware that you’re being inconsistent, you can still alter your actions to try to correct for that fact.
I look at a box of 100 bullets, and I know that it would be trivial for me to be in mortal danger from any one of them, but the box is perfectly safe.
It is trivial-ish for me to meet a trivial number of people and start to care about them, but it is certainly nontrivial to encounter a nontrivial number of people.