You’re making a mistake in assuming that ethical systems are intended to do what you think they’re intended to do. I’m going to make some completely unsubstantiated claims; you can evaluate them for yourself.
Point 1: The ethical systems aren’t designed to be followed by the people you’re talking to.
Normal people operate on internal guidance through implicit, internalized ethics, primarily guilt; formal ethical systems are largely and -deliberately- a rationalization game. That’s not an accident. Being a functional person means being able to manipulate the ethical system as necessary, and to justify the actions you would have taken anyway.
Point 2: The ethical systems aren’t just there to be followed; they’re there to see who follows them.
People who -do- need the ethical systems are, from a social perspective, dangerous and damaged. Ethical systems are ultimately a fallback for these kinds of people, but also a marker; “normal” people don’t -need- ethics. As a rule of thumb, anybody who strictly adheres to a code of ethics is some variant of sociopath. And also as a rule of thumb, some mechanism for taking advantage of these people, who can’t know any better, is going to be built into these ethical systems. It will generally take some form akin to “altruism”, and is most recognizable when ethical behavior begins to be labeled as selfishness, as in variants of Buddhism where personal enlightenment is treated as selfish, or in Comtean altruism.
Point 3: The ethical systems are designed to be flexible.
People who have internal ethical systems -do- need something to deal with situations which have no ethical solution but must nonetheless be solved. Ethical systems which don’t permit considerable flexibility in dealing with these situations aren’t useful. But because of the sociopaths, who still need ethical systems to keep them in line, you can’t just permit anything. This is where contradiction is useful; you can use mutually exclusive rules to justify whatever action you need to take, without worrying about any ordinary crazy person using the same contradictions to their advantage, since they’re trying to follow all the rules all the time.
Point 4: Ethical systems were invented by monkeys trying to out-monkey other monkeys.
Finally, ethical systems provide a framework by which people can assert or prove their superiority, thereby improving their perceived social rank (what, you think most people here are arguing with an interest in actually getting the right answer?). A good ethical framework needs to provide room for disagreement; ambiguity and contradiction are useful here as well, especially because a large part of the point of ethical systems is to provide a framework for justifying whatever action you happened to take. This is enhanced by perceptions of the ethical framework itself, which is why mathematicians will tend to claim utilitarianism is a great ethical system, in spite of it being a perfectly ambiguous “ethical system”; it has a superficial mathematical rigor to it, so it appears more scientific and lends itself to mathematics-based arguments.
See all the monkeys correcting you on trivial issues? Raising meaningless points that contribute nothing to anybody’s understanding of anything while giving them a basis to prove their intelligence in thinking about things you hadn’t considered? They’re just trying to elevate their social status, here measured by karma points. On a site called Less Wrong, descended from a site called Overcoming Bias, the vast majority of interactions are still ultimately driven by an unconscious bias for social status. Although I admit the quality of the monkey-games here is at times somewhat better than elsewhere.
If you want an ethical system that is actually intended to be followed as-is, try Objectivism. There may be other ethical systems designed for sociopaths, but as a rule, most ethical systems are ultimately designed to take advantage of the people who actually try to follow them, as opposed to those who merely pay lip service to them.
Being a functional person means being able to manipulate the ethical system as necessary, and to justify the actions you would have taken anyway.
One-sided. OTOH: an ethical system being a functional ethical system means it can resist too much system-gaming by individuals. Ethical systems have a social role. Communities that can’t persuade any of their members to sacrifice themselves in defence of the community don’t survive.
People who -do- need the ethical systems are, from a social perspective, dangerous and damaged. Ethical systems are ultimately a fallback for these kinds of people…
What? If voluntary ethics is the fallback for the dangerous and damaged, what is the law, the criminal justice system, the involuntary stuff? (ETA: and isn’t a sociopath by definition someone who can’t/won’t internalise social norms?)
Systems of ethical rules are needed to solve the difficult problem of making on-the-spot calculations, and the impossible problem of spontaneously coordinating without commitment. Which is to say, they are needed by almost everybody.
If you want an ethical system that is actually intended to be followed as-is, try Objectivism.
It may have the as-is characteristic, but it is very doubtful that it qualifies as ethics, since egoism is the opposite of ethics in the eyes of >99% of people.
Quit anthropomorphizing groups of people; criminal justice is designed for the sane; sociopathy is defined differently in psychiatry than among the general public; you never actually need to decide whether to save the world-famous violinist or the rather average doctor who occupy the different trolley tracks; and you’re believing what the other monkeys tell you about what you should do even when their actions clearly contradict it.
None of that actually matters, though, because you’re not actually arguing with me; you’re debating points. I didn’t give you anything to argue with; you’re just so used to people wanting to argue that you tried to find things you could argue -with-. There’s nothing there. It’s all just assertions of premises you either accept or deny. Arguing about premises doesn’t get you anywhere, will never get you anywhere, but you do it anyway. Why?
Do you think ethics are important enough to -defend-, even when there’s nothing to be gained from defending them? Why is that?
LOL. So, are you here for conversations or are you here to make undebatable pronouncements?
Good points. My entire post assumes that people are interested in figuring out what they would want to do in every conceivable decision-situation. That’s what I’d call “doing ethics”, but you’re completely correct that many people do something very different. Now, would they keep doing what they’re doing if they knew exactly what they’re doing and not doing, i.e., if they were aware of the alternatives? If they were aware of concepts like agentyness? And if yes, what would this show?
I wrote down some more thoughts on this in this comment. As a general reply to your main point: just because people act as though they are interested in x rather than y doesn’t mean that they wouldn’t rather choose y if they were more informed. And to me, choosing something because one is not optimally informed seems like a bias, which is why I thought the comparison/the term “moral anti-epistemology” has merit. However, under a more Panglossian interpretation of ethics, you could just say that people want to do what they do, and that this is perfectly fine. It depends on how much you value ethical reflection (there is quite a rabbit hole to go down here, actually, having to do with the question of whether terminal values are internal or chosen).
And if making people more informed in this manner makes them worse off?
The sad thing is that it probably will (the rationalist’s burden: aspiring to be more rational makes rationalizing harder, and you can’t just tweak your moral map and your map of the just world/universe to fit your desired (self-)image).
What is it that counts: revealed preferences, stated preferences, or preferences that are somehow idealized (what the person would prefer if they knew more, were smarter, etc.)? I’m not sure the last option can be pinned down in a non-arbitrary way. This would leave us with revealed preferences and stated preferences, even though stated preferences are often contradictory or incomplete. It would be confused to think that one type of preference is correct whereas the others aren’t. There are simply different things going on, and you may choose to focus on one or the other. Personally, I don’t intrinsically care about making people more agenty, but I care about it instrumentally, because it turns out that making people more agenty often increases their (revealed) concern for reducing suffering.
What does this make of the claim under discussion, that deontology could sometimes/often be a form of moral rationalizing? The point still stands, but it is qualified with a caveat, namely that it is only rationalizing if we are talking about (informed/complete) stated preferences. For whatever that’s worth. On LW, I assume it is worth a lot to most people, but there’s no mistake being made if it isn’t for someone.