A world in which I live among billions of happy people with realistic chances of meeting their goals is one I find much more desirable than a world where my friends and I are the only successful people in existence.
On one hand, there’s the cold utilitarian who values other lives only inasmuch as they further hir goals, and assigns no intrinsic worth to whatever goals they may have for themselves. This position does not coincide with solipsism, but it overlaps with it. On the other hand, there’s what we could call the naïve Catholic, who holds that more life is always better life, no matter in what horrid conditions. This position does not coincide with panpsychism, but it overlaps with it.
The strong altruistic component of EY’s philosophy is what sets it on a higher moral ground than Ayn Rand’s. For all her support of reason, Rand’s fatal flaw was that she failed to grasp the need for altruism; it was anathema to her, even if her brand of selfishness was strange in that she recognized other people’s right to be selfish too (the popular understanding of selfishness is more predatory than even she allowed).
EY agrees with Rand’s position that every mind should be free to improve itself, but he doesn’t dismiss cooperation. It makes perfect sense: The ferociously competitive realm of natural selection does often select for cooperation, which strongly suggests it’s a useful strategy. I can’t claim to divine his reasons, but the bottom line is that EY gets altruism.
(As chaosmage suggested, it is not impossible that EY merely pretends to be an altruist so people will feel more comfortable letting him talk his way into world domination (ahem, optimization), but the writing style of his texts about the future of humanity, and about how much it matters to him, is likelier to read the way it does if he really believes what he says.)
Still, the question stands: Why care about random people? I notice it’s difficult for me to verbalize this point because it’s intuitively obvious to me, so much so that my gut activates a red alarm at the sight of a fellow human who doesn’t share that feeling.
Whence empathy? Although empathy has a long tradition of support in many philosophies, antiquity alone is not a valid argument. Warring chimpanzees share as much DNA with us as hippie bonobos; the evidence for mirror neurons, and for their role in empathy, is still debated; and disguised sociopathy sounds like an optimal strategy.
Buddhism has a concept that I find highly appealing. It’s called metta and it basically states that sentient beings’ preference for not suffering is one you can readily agree with because you’re a sentient being too. There are several ways to express the same idea in contemporary terms: We’re all in this together, we’re not so different, and other feel-good platitudes.
We can go one step further and assert this: A world where only some personal sets of preferences get to be realized runs the risk of your preferences being ignored, because there’s no guarantee that you will be the one who decides which preferences are favored; whereas a world where all personal sets of preferences are equally respected is the one where yours have the best chance of being realized. To paraphrase the Toyota ads, what’s good for the entire world is good for you.
(I know most LWers will demand a selfish justification for altruism because any rational decision theory will require it, but I feel hypocritical having to provide a selfish argument for altruism. Ideally, caring for others shouldn’t need to be justified by appealing to an expected personal benefit, but I acknowledge that trying to advance this point is like trying to show a Christian ascetic that hoping to get to heaven by renouncing worldly pleasures is the epitome of calculated hedonism. I still haven’t resolved this contradiction, but fortunately this is the one place on the entire Internet where I can feel safe expecting to be proved wrong.)
Another odd thing about Rand’s egoism is that it’s mostly directed towards being able to pursue one’s goal of making excellent things for other people, not being hassled in the process, and being appropriately rewarded.
But he views extinction-level events as “that much worse” than a single death. Is an extinction-level event really that bad, though? If everyone gets wiped out, there’s no suffering left.
I’m not against others being happy and successful, and sure, that’s better than them not being. But I seem to have no preference for anyone existing. Even myself, my kids, my family—if I could, I’d erase the entire lot of us, but it’s just not practical.
Your original post says,
Would you please describe the sequence of thoughts leading to that conclusion?
Sure. The goal is to make TotalSuffering as small as possible, where each individual’s Suffering is >= 0. There may be some level of individual Suffering that rounds down to zero, like the pain of hurting your leg while trying to run faster, or stuff like that. The goal is to make sure no one is in real suffering, not to eliminate all Fun.
One approach to do that is to make sure everyone is not suffering. That entails a gigantic amount of work. And if I understand MWI, it’s actually impossible, as branches will happen creating a sort of hell. (Only considering forward branches.) Sure, it “all averages out to normal”, but tell that to someone in a hell branch.
The other way is to eliminate all life (or the universe). TotalSuffering is now 0, the optimal value.
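To make the degenerate minimum explicit, here is a minimal sketch of the objective as stated above (the function name and the example numbers are my own, purely illustrative): when the objective is a sum of non-negative per-person Suffering terms, the empty population trivially attains the global minimum of zero.

```python
# Toy model of the stated objective (illustrative only, not anyone's actual model):
# TotalSuffering is the sum of per-person Suffering values, each assumed >= 0.
def total_suffering(sufferings):
    assert all(s >= 0 for s in sufferings), "each individual Suffering is >= 0"
    return sum(sufferings)

# A world of mostly-happy people still has a positive total...
print(total_suffering([2, 0, 5]))  # 7
# ...while the empty world attains the global minimum exactly,
# which is the "eliminate all life" solution described above.
print(total_suffering([]))         # 0
```

In other words, the second approach wins under this objective precisely because the sum over an empty population is zero.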