For me, personally, I know that you could choose a person at random in the world, write a paragraph about them, and give it to me, and by doing that, I would care about them a lot more than before I had read that piece of paper, even though reading that paper hadn’t changed anything about them. Similarly, becoming friends with someone doesn’t usually change the person that much, but increases how much I care about them an awful lot.
Therefore, I look at all 7 billion people in the world, and even though I barely care about them, I know that it would be trivial for me to increase how much I care about one of them, and therefore I should care about them as if I had already completed that process, even if I hadn’t.
Maybe a better way of putting this is that I know that all of the people in the world are potential carees of mine, so I should act as though I already care about these people in deference to possible future-me.
For the most part, I follow—but there’s something I’m missing. I think it lies somewhere in: “It would be trivial for me to increase how much I care about one of them, and therefore I should care about them as if I had already completed that process, even if I hadn’t.”
Is the underlying “axiom” here that you wish to maximize the number of effects that come from the caring you give to people, because that’s what an altruist does? Or that you wish to maximize your caring for people?
To contextualize the above question, here’s a (nonsensical, but illustrative) parallel: I get cuts and scrapes when running through the woods. They make me feel alive; I like this momentary pain stimulus. It would be trivial for me to woods-run more and get more cuts and scrapes. Therefore I should just get cuts and scrapes.
I know it’s silly, but let me explain: A person usually doesn’t want to maximize their cuts and scrapes, even though cuts and scrapes might be appreciated at some point. Thus, the above scenario’s conclusion seems silly. Similarly, I don’t feel a necessity to maximize my caring—even though caring might be nice at some point. Caring about someone is a product of my knowing them, and I care about a person because I know them in a particular way (if I knew a person and thought they were scum, I would not care about them). The fact that I could know someone else, and thus hypothetically care about them, doesn’t make me feel as if I should.
If, on the other hand, the axiom is true—then why bother considering your intuitive “care-o-meter” in the first place?
I think there’s something fundamental I’m missing.
(Upon further thought, is there an agreed-upon intrinsic value to caring that my ignorance of some LW culture has led me to miss? This would also explain wanting to maximize caring.)
(Upon further-further thought, is it something like the following internal dialogue? “I care about people close to me. I also care about the fate of mankind. I know that the fate of mankind as a whole is far more important than the fate of the people close to me. Since I value internal consistency, in order for my caring-mechanism to be consistent, my care for the fate of mankind must be proportional to my care for the people close to me. Since my caring mechanism is incapable of actually computing such a proportionality, the next best thing is to be consciously aware of how much it should care if it were able, and act accordingly.”)
I care about self-consistency, but being self-consistent is something that must happen naturally; I can’t self-consistently say “This feeling is self-inconsistent, therefore I will change this feeling to be self-consistent.”
… Oh.
Hm. In that case, I think I’m still missing something fundamental.
I care about self-consistency because an inconsistent self is very strong evidence that I’m doing something wrong.
It’s not very likely that if I take the minimum steps to make the evidence of the error go away, I will make the error go away.
The general case of “find a self-inconsistency, make the minimum change to remove it” is not error-correcting.
I actually think that your internal dialogue was a pretty accurate representation of what I was failing to say. And as for self-consistency having to be natural, I agree, but if you’re aware that you’re being inconsistent, you can still alter your actions to try to correct for that fact.
I look at a box of 100 bullets, and I know that it would be trivial for me to be in mortal danger from any one of them, but the box is perfectly safe.
It is trivial-ish for me to meet a trivial number of people and start to care about them, but it is certainly nontrivial to encounter a nontrivial number of people.