I didn’t think this was the sort of doppelgangering you were talking about. I’m not trying to ascribe additional consequentialist justifications, I’m just jettisoning the entire justification and calling a preference a spade. If the deontologist’s point is that (some of) their preferences somehow possess extra justification, then they’ve already succeeded in annoying me with their meaningless moral grandstanding.
If Anton Chigurh delivers an eloquent defense of his personal philosophy, it won’t change my opinion of his moral status. This doesn’t seem related to my consequentialist outlook—if your position is that “murder is always wrong, all of the time”, I would expect a similar reaction.
I feel like I’m still missing whatever it is that your post is trying to convey about the “deontologist’s point”. What is the point of deontological justification? The vertebrate/renate example doesn’t do it for me, because there’s a clear way to distinguish between the intensional and extensional definitions: postulate a creature with a spine and no kidneys. Such an organism seems at least conceivable. But I don’t see what analogous recourse a deontologist has when attempting to make this distinction. It all just reduces to a chain of “because if”s that terminates with preferences. Even in the case of “X is only wrong if the agent performing X is aware it leads to outcome Y”, a preference over the rituals of cognition employed by another agent is still a preference. It just seems like an awfully weird one.
I find your complaints a bit slippery to get ahold of, so I’m going to say some things that floated into my brain while I read your comment and see if that helps.
A preference is one sort of thing that a deontic theory can take into account when evaluating an action. For instance, one could hold that a moral right can be waived by its holder at eir option: this takes into account someone’s preference. But it is only one type of thing that could be included.
There is no special reason to privilege preferences as an excellent place to stop when justifying a moral theory. They’re unusually actionable, which makes theories that stop there more usable than theories that stop in some other places, but they are not magic. The fact that stopping in the places deontologists like to stop (I’m fond of “personhood”, myself) does not come naturally to you does not make deontology an inherently bizarre system in comparison to consequentialism.
There is no special reason to privilege preferences as an excellent place to stop when justifying a moral theory.
But I don’t see preference as justifying a moral theory, I see it as explaining a moral theory. I don’t see how a moral theory could possibly be justified, the concept appears nonsensical to me. About the closest thing I can make sense of would be soundly demonstrating that one’s theory doesn’t contradict itself.
Put another way, I can imagine invalidating a moral theory by demonstrating the lack of a necessary condition (like consistency), but I can’t imagine validating the theory by demonstrating the presence of a “sufficient” condition.
Perhaps you can tell me a little about your ethical beliefs so I know where to start when trying to explain?
No real framework to speak of. Hanson’s efficiency criterion appeals to me as a sort of baseline morality. It’s hard to imagine a better first-order attack on the problem than “everyone should get as much of what they want as possible”, but of course one can imagine an endless stream of counter-examples and refinements. I presumably have most standard human “pull the child off the tracks” sorts of preferences.
I’m not sure I know what you’re looking for. Unusual moral beliefs or ethical injunctions? I think lying is simultaneously
Despicable by default
Easily justified in the right context
Usually unpleasant to perform even when feeling justified in doing so, but occasionally quite enjoyable
if that helps.
I’m not sure what to do with that as stated at all, I’m afraid. But “as possible” seems like a load-bearing phrase in the sentence “everyone should get as much of what they want as possible”, because this isn’t literally possible for everyone simultaneously (two people could desire the same thing, such that either of them could get it, but not both), and you have to have some kind of mechanism to balance contradictory desires. What mechanism looks right to you?
Agreed, “as possible” is quite heavy, as is “everyone”. But it at least slightly refines the question “what’s right?” to “what’s fair?”. Which is still a huge question.
The quasi-literal answer to your question is: a Voronoi diagram. It looks right—I don’t quite know what it means in practice, though.
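For what it’s worth, a Voronoi diagram assigns every point in a space to the nearest of a set of “seed” points. Read as an allocation mechanism, that would mean each contested good goes to whichever claimant is “closest” to it under some distance measure. The sketch below is purely illustrative: the claimants, items, coordinates, and Euclidean metric are all made-up assumptions, not anything proposed in the thread.

```python
# Toy Voronoi-style allocation: each contested item goes to the
# claimant whose "position" (however that is measured) is nearest.
# Claimants, items, and the Euclidean metric are invented here to
# illustrate the metaphor, not to propose an actual mechanism.
import math

def voronoi_allocate(claimants, items):
    """Map each item to the claimant with the smallest distance to it.

    claimants: dict of name -> (x, y) position
    items:     dict of name -> (x, y) position
    """
    allocation = {}
    for item, item_pos in items.items():
        # Pick the claimant minimizing Euclidean distance to the item.
        nearest = min(claimants, key=lambda c: math.dist(claimants[c], item_pos))
        allocation[item] = nearest
    return allocation

claimants = {"alice": (0.0, 0.0), "bob": (4.0, 0.0)}
items = {"apple": (1.0, 1.0), "pear": (3.5, -0.5)}
print(voronoi_allocate(claimants, items))
# {'apple': 'alice', 'pear': 'bob'}
```

One place the metaphor gets fuzzy in practice: an item exactly equidistant from two claimants (a point on a Voronoi cell boundary) falls to whichever claimant `min` happens to see first, so ties still need some further fairness rule.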
In general, the further a situation is from my baseline intuitions concerning fairness and respect for apparent volition, the weaker my moral apprehension of it is. Life is full of trade-offs of wildly varying importance and difficulty. I’d be suspicious of any short account of them.
I’m just jettisoning the entire justification and calling a preference a spade.
Good point. There is a lot of fuzziness around “preferences”, “ethics”, “aesthetics”, “virtues” etc. Ultimately all of these seem to involve some axiological notion of “good”, or “the good life”, or “good character” or even “goods and services”.
For instance, what should we make of the so-called “grim aesthetic”? Is grimness a virtue? Should it count as an ethic? If not, why not?
The second virtue is relinquishment:
I think the necessary and sufficient conditions for “grimness” are found there.