So you’re saying that there are no true moral dilemmas (no undecidable moral problems)?
Depends on what you mean by “undecidable”. There may be situations in which it’s hard in practice to decide whether it’s better to do A or to do B, sure, but in principle either A is better, B is better, or the choice doesn’t matter.
So, for example, suppose a situation where a (true) moral system demands both A and B, yet in this situation A and B are incompossible. Or it forbids both A and B, yet in this situation doing neither is impossible. Those examples have a pretty deontological air to them... could we come up with examples of such dilemmas within consequentialism?
Well, the consequentialist version of a situation that demands A and B is one in which A and B provide equally positive expected consequences and no other option provides consequences that are as good. If A and B are incompossible, I suppose we can call this a moral dilemma if we like.
And, sure, consequentialism provides no tools for choosing between A and B; it merely endorses (A OR B). Which makes it undecidable using just consequentialism.
There are a number of mechanisms for resolving the dilemma that are compatible with a consequentialist perspective, though (e.g., picking one at random).
Thanks, that was helpful. I’d been having a hard time coming up with a consequentialist example.
Then either the demand or prohibition is not absolute, or the moral system is broken.
How are you defining morality? If we use a shorthand definition that morality is a system that guides proper human action, then any “true moral dilemmas” would be a critique of whatever moral system failed to provide an answer, not proof that “true moral dilemmas” existed.
We have to make some choice. If a moral system stops giving us any useful guidance when faced with sufficiently difficult problems, that simply indicates a problem with the moral system.
ETA: For example, if I have a completely strict sense of ethics based upon deontology, I may feel an absolute prohibition on lying and an absolute prohibition on allowing humans to die. That would create a moral dilemma for that system in the classical case of Nazis seeking Jews that I’m hiding in my house. So I’d have to switch to a different ethical system. If I switched to a system of deontology with a value hierarchy, I could conclude that human life has a higher value than telling the truth to governmental authorities under the circumstances and then decide to lie, solving the dilemma.
I strongly suspect that all true moral dilemmas are artifacts of the limitations of distinct moral systems, not morality per se. Since I am skeptical of moral realism, that is all the more the case; if morality can’t tell us how to act, it’s literally useless. We have to have some process for deciding on our actions.
I’m not defining it: I anticipate that your answer to my question will vary on the basis of what you understand morality to be.
Would it? It doesn’t follow from that definition that dilemmas are impossible. And the claim that all true moral dilemmas are artifacts of the limitations of distinct moral systems, not morality per se, is exactly the claim I’m asking for an argument for.
I’m really confused about the point of this discussion.
The simple answer is: either a moral system cares whether you do action A or action B, preferring one to the other, or it doesn’t. If it does, then the answer to the dilemma is that you should do the action your moral system prefers. If it doesn’t, then you can do either one.
Obviously this simple answer isn’t good enough for you, but why not?
The tricky task is to distinguish between those three cases, and to find general rules that can do this in every situation in a unique way while also representing your concept of morality.
If you can do this, publish it.
Well, yes, finding a simple description of morality is hard. But you seem to be asking if there’s a possibility that it’s in principle impossible to distinguish between these 3 cases for some situation—and this is what you call a “true moral dilemma”—and I don’t see how the idea of that is coherent.
I did not call anything “true moral dilemma”.
Most dilemmas are situations where similar-looking moral guidelines lead to different decisions, or situations where common moral rules are inconsistent or not well-defined. In those cases, it is hard to decide whether the moral system prefers one action or the other, or does not care.
It seems to me to omit a (maybe impossible?) possibility: for example that a moral system cares about whether you do A or B in the sense that it forbids both A and B, and yet ~(A v B) is impossible. My question was just whether or not cases like these were possible, and why or why not.
I admit that I hadn’t thought of moral systems as forbidding options, only as ranking them, in which case that doesn’t come up.
If your morality does have absolute rules like that, there isn’t any reason why those rules couldn’t come into conflict. But even then, I wouldn’t say “this is a true moral dilemma” so much as “the moral system is self-contradictory”. Not that this is a great help to someone who does discover this about themselves.
Ideally, though, you’d only have one truly absolute rule, and a ranking between the rules, Laws of Robotics style.
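For what it’s worth, here’s a minimal sketch of that kind of ranked-rule resolution (in Python; the specific rules and the scenario are hypothetical placeholders, made up purely for illustration): when two absolute-sounding rules conflict, the higher-ranked one wins, so the system still yields a unique verdict rather than a dilemma.

```python
# Toy sketch of a ranked-rule ("Laws of Robotics"-style) resolver.
# The rules and the scenario are hypothetical placeholders, purely illustrative.

RULES = [
    "do not allow a human to come to harm",   # rank 0: most important
    "do not lie",                             # rank 1
    "obey governmental authorities",          # rank 2: least important
]

def worst_violation_rank(violated):
    """Rank of the most important rule an action violates (lower = worse).
    Actions that violate nothing get a rank past the end of the list."""
    return min((RULES.index(rule) for rule in violated), default=len(RULES))

def choose(options):
    """Pick the action whose most important violation is the least important rule.
    `options` maps action name -> set of rules that action violates."""
    return max(options, key=lambda action: worst_violation_rank(options[action]))

# The classic hiding-refugees case: every available action violates some rule,
# but the ranking still produces a unique verdict.
options = {
    "lie to the authorities": {"do not lie", "obey governmental authorities"},
    "tell the truth": {"do not allow a human to come to harm"},
}
print(choose(options))  # -> lie to the authorities
```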
So, Kant for example thought that such moral conflicts were impossible, and he would have agreed with you that no moral theory can be both true and allow for moral conflicts. But it’s not obvious to me that the inference from ‘allows for moral conflict’ to ‘is a false moral theory’ is valid. I don’t have some axe to grind here; I was just curious if anyone had an argument defending that move (or attacking it, for that matter).
I don’t think that it means it’s a false moral theory, just an incompletely defined one. In cases where it doesn’t tell you what to do (or, equivalently, tells you that both options are wrong), it’s useless, and a moral theory that did tell you what to do in those cases would be better.
That one thing a couple years ago qualifies.
But unless you get into self-referential moral problems, no. I can’t think of one off the top of my head, but I suspect that you can find them among decisions that affect your decision algorithm, and decisions where your decision-making algorithm affects the possible outcomes. Probably like Newcomb’s problem, only twistier.
(Warning: this may be basilisk territory.)