The problem with what Elizier says there is reconciling it with his reason for being moral. For example:
“And once you realize that the brain can’t multiply by eight, then the other cases of scope neglect stop seeming to reveal some fundamental truth about 50,000 lives being worth just the same effort as 5,000 lives, or whatever. You don’t get the impression you’re looking at the revelation of a deep moral truth about nonagglomerative utilities. It’s just that the brain doesn’t goddamn multiply. Quantities get thrown out the window.”
However, Elizier’s comments on “The Pebblesorters” amongst others make clear that he defines morality based on what humans feel is moral. How is this compatible?
In addition, given that the morality in the Metaethics is fundamentally based on preferences, there are severe problems. Take Hypothetical case A, which is broad enough to cover a lot of plausible scenarios.
A- A hypothetical case where there is an option which will be the best from a consequentialist perspective, but which for some reason the person who takes the option would feel overall more guilty for choosing it AND be less happy afterwards than the alternative, both in the short run and the long run.
Elizier would say to take the action that is best from a consequentialist perspective. This is indefensible however you look at it: logically, philosophically, etc.
Ok, I can see why you read the Pebblesorters parable and concluded that on Eliezer’s view, morality comes from human feelings and intuitions alone. The Pebblesorters are not very reflective or deliberative (although there’s that one episode where a Pebblesorter makes a persuasive moral argument by demonstrating that a number is composite). But I think you’ll find that the parable is also compatible with the position that morality comes from human feelings and intuitions, as well as intuitions about how to reconcile conflicting intuitions and intuitions about the role of deliberation in morality. And, since The Moral Void and other posts explicitly say that such metaintuitions are an essential part of the foundation of morality, I think it’s safe to say this is what Eliezer meant.
I’ll set aside your scenario A for now because that seems like the start of a different conversation.
Elizier doesn’t have sufficient justification for including such metaintuitions anyway. Scenario A illustrates this well: assuming that reflecting on the issue doesn’t change the balance of what a person wants to do, the metaintuitions do no work, and Elizier’s consequentialism is the equivalent of the stone tablet.
You really ought to learn to spell Eliezer’s name.
Anyways, it looks like you’re no longer asking for clarification of the Metaethics sequence and have switched to critiquing it; I’ll let other commenters engage with you on that.