Yeah, I’m pretty sure that one of the premises behind both of these stories is that there is one objectively correct morality, given that a few basic definitions are shared. “If we agree that suffering is bad and that ~suffering is good then humanist, rational utilitarianism should follow directly,” or something.
I can definitely agree that acting in a way that fails to optimally minimise suffering is *bad*. I can even agree that in a lot of cases, punishing bad people will minimise further suffering. The problem is (and this is the message I got from The Sword of Good) that no one sane actually thinks of themselves as the bad guy of the story, and it takes an Act of Rational Intellect to make a justified choice about which side is actually right about being right.
The upshot is that I balk at the “evil” label. People are bad and good; only cartoon characters are evil, because they do bad for bad’s sake. I guess in my head I think that to be a Dark Lord requires being Evil and not just bad. That might be a silly definition, since it means I’m essentially defining Dark Lords not to exist, but then again we only hear about them in children’s stories and HPMoR.
Sometimes (not every time) I get a twinge of subconscious worry when EY primes his argument-cannon with rational powder but emotional wadding. It makes for incredible chapters; I get full-body pins and needles whenever Harry does something amazing with a Dementor, and when the emotion is humanist triumphal awe it’s probably safe. I know the argument: there’s no reason why rationality and emotion have to weaken one another, and emotion aimed by rationality can work to good ends. I just don’t want to confront a situation where Dumbledore ends up being the Dark Lord despite wanting to be a good guy and trying very, very hard, because there’s a risk there of sending the message that we should not only fight against powerful bad people, but hate them because they are Evil for not quite being smart enough. I don’t think that would be emotion aimed by rationality.
I know that’s not the sort of ending EY would craft; I only wonder how we *would* write a Dark Lord Dumbledore ending, and how likely it is that he will.
EDIT: I just re-read the “Are Your Enemies Innately Evil?” article that Xachariah referenced, only this time I read it from the point of view of a moral relativist. I have gained an insight into how the other side thinks. I have also found even more motivation to make hard relativism extinct.
A form of relativism does follow from naturalism, as does a moral system which favors one’s own gains above the gains of others. See: the Babyeaters story.
Egoism and relativism are actually pretty powerful arguments. “Good” does not exist external to perception or experience. I cannot experience the “good” of anyone else’s experience, except in an extremely diluted way through empathic networks, so in order to maximize the amount of good that exists I must try to maximize my own pleasure and achieve my own values. This doesn’t preclude helping others; we can be helpful to others if we enjoy or value being helpful, etc.
Everyone accepts relativism on a small scale, like with favorite colors. It’s not very different on a large scale, either. If someone is genetically modified to have a passionate drive to eat children, and has a burning desire to do so and sees nothing wrong with doing so, I think they should eat children. It’s not justifiable to hold people to obligations that they don’t internally recognize as correct, for the same reasons that “because God said so” is not an actual moral argument.
There’s an important difference between this kind of relativism and other kinds, though. The kind of relativism I’m trying to defend here doesn’t say that nothing is wrong and that all paths lead up the same mountain, or whatever. I’m trying to say that because morality is derived from inherent values, where those inherent values differ we will get different moral answers for different people. But most humans share a lot of values, so this kind of relativism really doesn’t open itself up to the kind of gut-level criticism that most people throw at it, like the ever-popular argumentum ad consequentiam: “but then everyone would murder everyone!”
I personally don’t follow any form of relativism, although I’m fairly selfish and egoistic (yay rationality!), but that’s because my ethics are totally ad hoc and disorganized. I use a sort of story-based virtue ethics to justify my actions (“what would a hero do?”; “what kind of hero am I?”). Stories are important to humans, so there’s probably a naturalistic or cultural justification somewhere in there, but I don’t really know what it is specifically.
Well, that’s the difference between hard relativism and soft relativism. Hard relativism holds that there is no “right”, and that’s the one I think has to go. Mind you, I think the relativism you describe is still a bit hard for me—I’d argue that whilst what is right is relative in the sense that it’s contingent on the situation at hand, within a given situation rightness is fixed and not at all dependent on one’s viewpoint. I certainly don’t agree that a relativism with any hardness to it follows from naturalism. I say this only to identify my position relative to your own, since this probably isn’t the right place for me to start trying to debate against your ethics. Plus it’s 2am and I spent all day at uni arguing ethics. I’m burnt out.
I read the baby-eaters story a while ago; I think I disagree with EY on that one, although it’s possible I misread it. I share his apparent belief that the baby-eaters had to be stopped: the babies were suffering, and suffering=bad. I don’t see why the first, “fake” ending was the sad one, though; to me the second, “real” ending was the horrible one, and the fake one was close enough to what I would have tried for if I’d been there. I am warning you now: if I ever meet super-intelligent aliens who want to raise the human race up from perdition and erase the need for suffering, I’m going to be on the side of the angels until humanity sorts its shit out.
If you think the humans should have stopped the Babyeaters, but you don’t think the Babyeaters were evil for pursuing the values that evolution gave them, then you agree that naturalism leads to the form of relativism I am defending and that this form of relativism is okay.
An equivalent: the homosexuality “debate”.
Well as I’ve said somewhere in this tree, I don’t like the “evil” label, so I’ll stick to “bad”. But I do think that the baby-eaters were bad, regardless of what they were evolved to do. There are a whole bunch of things that humans are evolved to do that I also think are morally wrong—I think that humans who do those things are bad. If we didn’t have a drive to do bad things, no one would ever do them and morality would be pointless. The baby-eaters perhaps shouldn’t be hated too much for being bad (in the story they weren’t as intelligent as humans and they thought slower) but in my books they definitely were bad, and had to be stopped.
So I don’t need to admit relativism for the sake of consistency. I think the homosexuality thing is probably a huge can of worms that would trap me in this thread for weeks if I opened it, so with your permission I’m going to let that one pass.
I also, by the way, think that the baby-eaters’ psychology was probably impossible, and their evolutionary path extremely contrived and unlikely. I know baby-eaters aren’t necessary to argue for relativism, but if they were then I would think that relativism was outlandish and absurd. On the other hand, I think my ethics still extends to this crazy borderline case, so I’m willing to allow it for discussion, but that’s a reflection of my confidence in my ethics, not the fitness of the thought experiment.
Anyway, I’m not trying to convince you—I only spoke out against relativism above to register my disapproval of extreme hard relativism, which you don’t appear to espouse. I look forward to your reply if you choose to make one, but I won’t rebut it because we are waaay off topic for the thread. :)
Sure thing; that all means that you don’t support (pure) naturalism, which is okay with me even if I like naturalism.
I agree. What a pleasant conversation. This is why I love Less Wrong.