The LotR fanfic has the same basic theme as MoR: Rationality leads to power and corruption. Both have an evil villain who is smarter, more rational, and more conventionally evil than the protagonists. LotR makes it more explicit: The Ring makes you more rational, and this is synonymous with making you more evil.
But “corruption” is a word wielded by the less-smart. MoR portrays Voldemort with much more sympathy than any conventional fantasy would.
And we must remember that Eliezer’s CEV depends on the supposition that there is no absolute morality, no basis for calling Dumbledore morally superior to Voldemort, or Gandalf morally superior to Sauron. (If there were, then instead of CEV, we would seek to discover a superior morality, or at least to distinguish good morals from bad ones.)
This fits a pattern in which rationality opens one’s eyes to possibilities that are shut out by conventional morality; and acknowledging those possibilities makes one appear evil to the conventional eyes telling the story.
I wonder where Eliezer can go with any conventional fantasy story, as the entire purpose of such stories is to validate moral presuppositions that (I think) he rejects entirely. Will MoR lead to a vindication of Voldemort?
And we must remember that Eliezer’s CEV depends on the supposition that there is no absolute morality, no basis for calling Dumbledore morally superior to Voldemort, or Gandalf morally superior to Sauron.
Do you expect that Eliezer would agree with what you just wrote? I personally would bet at 99:1 odds that he would disagree strongly with the second half (as the other commenters have pointed out).
If you think that he would disagree strongly, have you considered that it’s more likely you’ve misinterpreted his position (apparently Eliezer admits that the metaethics sequence isn’t his best-written material) than that he’s misinterpreted himself?
I personally would bet at 99:1 odds that he would disagree strongly with the second half (as the other commenters have pointed out).
You can’t have the first half without the second half. Either you are a moral absolutist, or a moral relativist. Moralities are things that can be ordered by an independent observer, or they aren’t. I don’t think there’s any middle ground. Saying “Gandalf is morally superior to Sauron when seen from a perspective similar to that of Gandalf; but Sauron is morally superior to Gandalf when seen from a perspective similar to that of Sauron”, is not saying that Gandalf is morally superior to Sauron. Claiming that they are the same thing is like saying, “I can believe that 3 is greater than 2 without admitting that there is any objective basis for comparing the magnitudes of numbers, and without denying that 2 may be greater than 3 to someone else.”
You can’t have the first half without the second half. Either you are a moral absolutist, or a moral relativist.
See, I’d give 99:1 odds that he’d strongly disagree with this as well (as do I). Now, your position is that you must be one or the other (if you’re to be coherent), but I hope you can admit of the possibility that Eliezer sees that as a false dichotomy. From your perspective, this makes his metaethics a hopeless muddle of absolutism and relativism, but this should give you different predictions about how MoR turns out than would the assumption that he’s a standard moral relativist.
I hope we can at least agree on that much, before we turn to arguing anything else.
If I rated one morality as (2, 3) on two separate orthogonal axes of goodness (e.g. freedom and joy) and another morality as (3, 2) on the same two axes, then a morality that rates (4, 4) is superior to either, and a morality that rates (1, 1) is inferior to both, while I wouldn’t be able to absolutely order (2, 3) versus (3, 2) unless I appropriately weighted those two values; and those weights I needn’t consider absolute constants of the universe, even if I considered the maximization of those qualities absolutely good.

Note: The above is meant as an example only, not as a description of my own system of morality.

The problem of comparing things on the different axes (“solved” by converting to units of utility) is the same for absolutists and relativists.

I’ve just now given you an example where someone can be an absolutist about which criteria are to be used, but a relativist about the weight assigned to those criteria.

Dividing the world into pure absolutists (who can order every morality on a single axis) and pure relativists (who don’t order any moralities) is a very incomplete model.
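A minimal sketch of the arithmetic behind this exchange (hypothetical and illustrative only; the axis names, weights, and function names are my own, not anything from CEV or the thread): Pareto dominance licenses some comparisons with no choice of weights at all, while ordering (2, 3) against (3, 2) requires weights that may legitimately differ between observers.

```python
# Hypothetical illustration: moralities scored on two axes, e.g. (freedom, joy).
# Pareto dominance yields only a partial order; collapsing the axes to a single
# number requires weights, which is where the "relativist about weights" move enters.

def pareto_dominates(a, b):
    """True if a is at least as good as b on every axis and strictly better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def weighted_score(m, weights):
    """Collapse the axes into one number; this is the step that needs chosen weights."""
    return sum(w * x for w, x in zip(weights, m))

a, b, better, worse = (2, 3), (3, 2), (4, 4), (1, 1)

# Comparisons that need no weights at all:
assert pareto_dominates(better, a) and pareto_dominates(better, b)
assert pareto_dominates(a, worse) and pareto_dominates(b, worse)

# But a vs. b is undecidable without weights:
assert not pareto_dominates(a, b) and not pareto_dominates(b, a)

# Different, equally coherent weights reverse the ordering of a and b:
assert weighted_score(a, (1, 2)) > weighted_score(b, (1, 2))  # joy weighted higher
assert weighted_score(a, (2, 1)) < weighted_score(b, (2, 1))  # freedom weighted higher
```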
Either you are a moral absolutist, or a moral relativist. Moralities are things that can be ordered by an independent observer, or they aren’t. I don’t think there’s any middle ground.
There appear to be different flavors of objective and subjective morality.
- Moral Realism: There is objective truth, right and wrong.
- Moral Universalism: There is a morality that can be applied to a general class, for example all humans.
- Value Pluralism: There may be multiple equally correct and fundamental value systems that conflict with each other.
- Moral Consequentialism: Right and wrong can be determined from consequences.
- Moral Relativism: There is no objective truth; we should be equally tolerant of all systems.
- Moral Perspectivism: There is no objective truth, but some systems are better than others.
- Moral Nihilism: Morality is an illusion; nothing is moral or immoral.
This list could go on—but if you find it valuable to split belief systems between moral realism and not moral realism, then I don’t see how I could meaningfully object… :)
Um… I wouldn’t say CEV is based on a rejection of the idea of an objective moral standard, since Eliezer himself doesn’t exactly reject that:
Yes, I really truly do believe that humanity is better than the Pebblesorters! I am not being sarcastic, I really do believe that. I am not playing games by redefining “good” or “arbitrary”, I think I mean the same thing by those terms as everyone else. When you understand that I am genuinely sincere about that, you will understand my metaethics. I really don’t consider myself a moral relativist—not even in the slightest!

From The Bedrock of Morality: Arbitrary?.
I do say CEV is based on a rejection of an objective moral standard, because the inference from CEV to moral relativism is clearer to me than those things Eliezer has said about not being a moral relativist, and because whether CEV implies moral relativism is a better-defined question than whether Eliezer is a moral relativist. (And whether someone is a moral relativist is determined by what they do, and by the actions they advocate, not by what they say when asked “Are you a moral relativist?”)
You can’t let what someone states directly trump the implications of the other things they say. Otherwise we could not refute theists when they state principles that imply some conclusion X, then deny that they imply X. (Example: “But if the reason to believe in God is that complex things must be designed by yet-more-complex things, then God must be even more complex.” “No, God is perfect simplicity.”)
Anyway, one thing I do know about Eliezer is that he doesn’t like it when I assert things about him. He may believe he is not a relativist, and act in accordance with that at times; and that may be more relevant to the outcome of Harry Potter: MoR than things I infer from CEV.
Except that, near as I can tell, CEV is NOT itself in any way based on relativism.
The idea basically amounts to “figure out what criteria people actually mean by ‘morality’, or more generally, what it is they actually would have wanted if they knew more, had spent more time considering moral issues, etc...”
If you believed in objective morality, you would try to figure out what good morals are, rather than take the position (as in CEV) that every moral framework is equally valid from within that moral framework, and therefore you may treat them simply as goals, and all you can do is try to fulfil your goals; and that it makes no sense to wonder about whether you should have different goals/values/morals.
Whut? Where in the concept of the CEV is that idea implied? The whole idea is something like “humans seem to mean SOMETHING when they talk about this morality stuff. When we throw around words like ‘should’, that’s basically (well, more or less) a reference to the underlying algorithm we use to reason about morality. So just extract that part, feed into it more accurate information and more processing power, let it run, including modeling how it would update itself in light of new thoughts/etc, and go from there.”
Where in that is anything saying that any framework which could be asserted to be a moral framework actually is one?
And we must remember that Eliezer’s CEV depends on the supposition that there is no absolute morality, no basis for calling Dumbledore morally superior to Voldemort, or Gandalf morally superior to Sauron.
Going from “there is no absolute morality” to “there is no basis for calling agent A morally superior to agent B” is a much broader jump than you make it seem here. The first part I agree with; the second part is much less clear to me.
If by “basis” you mean “absolute basis,” well, OK, but so what?
If Tolkien had believed that rooting for Frodo over Sauron was morally equivalent to rooting for Arsenal over Manchester United, LotR would have been very different.

That’s certainly true.
What else could you possibly mean other than absolute basis? That’s not a rhetorical question; I’d appreciate seeing it spelled out. You can’t say “Agent A is morally superior to agent B” in anything but absolute terms. Otherwise, you can only say, “Agent A is morally superior to agent B from my perspective, which is close to agent A; but someone else at a position equally close to agent B might say with equal validity that agent B is morally superior to agent A.” And that is a very different statement!
What else could you possibly mean other than absolute basis?
I can call Gandalf morally superior to Sauron (1) on the basis of my moral standards.
If I’m understanding your question correctly, you think I can’t possibly do this; that my own moral standards aren’t sufficient basis for calling Gandalf morally superior to Sauron; that I have to invoke an absolute morality in order to do that.
Is that right? I have to admit, that strikes me as a silly idea, but I assure you I’m not mocking you here… I can’t come up with any other interpretation of your question. If you mean something different, I’d appreciate correction.
(1) Actually, it has been long enough since I read LotR that I’m not certain of that judgment… I can’t recall what Sauron actually did beyond being everyone’s chosen enemy. As I recall, we don’t actually get to see much of Sauron’s activity. But I’m assuming for the sake of the argument that if I reread the books I would in fact conclude he was morally inferior to Gandalf.
What else could you possibly mean other than absolute basis?
Isn’t it possible to condemn Sauron’s moral stance as inconsistent (i.e. irrational)? If Gandalf, on the other hand, espouses and practices a consistent morality, isn’t that grounds for calling Gandalf morally superior to Sauron, without claiming the existence of absolute moral standards?
Well, except you’ve assigned “consistency” absolute moral value, the same way you might assign “saving the world” or “making rings that suck out people’s souls” moral value.
No, “consistency” is another cheap approximation of morality that doesn’t match our intuitions, even our intuitions informed by knowledge and reflection.
There could be an agent with perfectly consistent criteria for which actions it considers “right” and which actions it considers “wrong” that would still endorse morally abhorrent actions; “always act to maximize the number of paperclips” is perfectly consistent.
I expect the story will make explicit what’s been hinted at so far: that Voldemort is the product of Dementation, and is brain-damaged. This would, by Eliezer’s lights, place him in a different moral frame of reference from normal, neurologically intact humanity. Voldemort’s not mistaken about how he should be using his talents. He’s now an inhuman mind pursuing inhuman ends.