If we have four people on a life boat and food for three, morality must provide a mechanism for deciding who gets the food.
That’s an interesting claim. Are you saying that true moral dilemmas (i.e. a situation where there is no right answer) are impossible? If so, how would you argue for that?
I think they are impossible. Morality can say “no option is right” all it wants, but we still must pick an option, unless the universe segfaults and time freezes upon encountering a dilemma. Whichever decision procedure we use to make that choice (flip a coin?) can count as part of morality.
I take it for granted that faced with a dilemma we must do something, so long as doing nothing counts as doing something. But the question is whether or not there is always a morally right answer. In cases where there isn’t, I suppose we can just pick randomly, but that doesn’t mean we’ve therefore made the right moral decision.
Are we ever damned if we do, and damned if we don’t?
When someone is in a situation like that, they lower their standard for “morally right” and try again. Functional societies avoid putting people in those situations because it’s hard to raise that standard back to its previous level.
Well, if all available options are indeed morally wrong, we can still try to see if any are less wrong than others.
Right, but choosing the lesser of two evils is simple enough. That’s not the kind of dilemma I’m talking about. I’m asking whether or not there are wholly undecidable moral problems. Choosing between one evil and a lesser evil is no more difficult than choosing between an evil and a good.
But if you’re saying that in any hypothetical choice, we could always find something significant and decisive, then this is good evidence for the impossibility of moral dilemmas.
It’s hard to say, really.
Suppose we define a “moral dilemma for system X” as a situation in which, under system X, all possible actions are forbidden.
Consider the systems that say “Actions that maximize this (unbounded) utility function are permissible; all others are forbidden.” Then the situation “Name a positive integer, and you get that much utility” is a moral dilemma for those systems: there is no utility-maximizing action, so all actions are forbidden and the system cracks.

It doesn’t help much if we require the utility function to be bounded; it’s still vulnerable to situations like “Name a real number less than 30, and you get that much utility,” because there is no largest real number less than 30. The only way to get around this kind of attack by restricting the utility function is to require the range of the function to be a finite set. For example, if you’re a C++ program, your utility might be represented by a 32-bit unsigned integer, so when asked “How much utility do you want?” you just answer “2^32 − 1”, and when asked “How much utility less than 30.5 do you want?” you just answer “30”.
(Ugh, that paragraph was a mess...)
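Here’s a minimal sketch of that finite-range idea, continuing the C++ illustration from the comment above; the class and function names, and treating the challenges as function calls, are purely illustrative assumptions rather than anything from the original comment:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <iostream>
#include <limits>

// Sketch: an agent whose utility range is the finite set of uint32_t values.
// Because the range is finite, every "name your utility" challenge has a
// largest answer, so no option is left forbidden.
struct FiniteUtilityAgent {
    // "Name a positive integer, and you get that much utility."
    static std::uint32_t nameUtility() {
        return std::numeric_limits<std::uint32_t>::max();  // 2^32 - 1
    }

    // "Name an amount of utility strictly less than `bound`."
    static std::uint32_t nameUtilityBelow(double bound) {
        double largest = std::ceil(bound) - 1.0;  // largest integer strictly below bound
        if (largest < 0.0) return 0;              // nothing representable below the bound
        double cap = static_cast<double>(std::numeric_limits<std::uint32_t>::max());
        return static_cast<std::uint32_t>(std::min(largest, cap));
    }
};

int main() {
    std::cout << FiniteUtilityAgent::nameUtility() << '\n';          // 4294967295
    std::cout << FiniteUtilityAgent::nameUtilityBelow(30.5) << '\n'; // 30
}
```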
That is an awesome example. I’m absolutely serious about stealing that from you (with your permission).
Do you think this presents a serious problem for utilitarian ethics? It seems like it should, though I guess this situation doesn’t come up all that often.
ETA: Here’s a thought on a reply. Given restrictions like time and knowledge of the names of large numbers, isn’t there in fact a largest number you can name? Something like Graham’s number won’t work (it’s far too small anyway) because you can always add one to it, but transfinite numbers aren’t made larger by adding one. And likewise with the largest real number under thirty: maybe you can use a function to specify the number? Or, if not, just say “29.999...” and keep saying “nine” for as long as you can, until time runs out (or until you calculate that the marginal utility gain is balanced by the cost of saying “nine” over and over for a long time).
Transfinite cardinals aren’t, but transfinite ordinals are. And anyway transfinite cardinals can be made larger by exponentiating them.
Good point. What do you think of Chrono’s dilemma?
“Twenty-nine point nine nine nine nine …” until the effort of saying “nine” again becomes less than the corresponding utility difference. ;-)
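A toy way to make that stopping rule concrete (the constant per-“nine” cost is an assumed, illustrative parameter): the k-th nine after the decimal point is worth 9 × 10^-k of utility, so you keep talking only while that marginal gain exceeds the cost of saying “nine” again.

```cpp
#include <iostream>
#include <string>

// Sketch: build "29.999..." one digit at a time, stopping once the marginal
// utility of another nine (9 * 10^-k for the k-th decimal place) no longer
// exceeds an assumed constant cost of saying "nine" one more time.
std::string nameNumberBelowThirty(double cost_per_nine) {
    std::string answer = "29";
    double marginal_gain = 0.9;  // the first nine takes you from 29 to 29.9
    bool first_digit = true;
    while (marginal_gain > cost_per_nine) {
        if (first_digit) { answer += '.'; first_digit = false; }
        answer += '9';
        marginal_gain /= 10.0;   // each further nine is worth a tenth as much
    }
    return answer;
}

int main() {
    std::cout << nameNumberBelowThirty(1e-6) << '\n';  // "29.999999" (six nines)
}
```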
Sure, be my guest.
Honestly, I don’t know. Infinities are already a problem, anyway.
My view is that a more meaningful question than ‘is this choice good or bad’ is ‘is this choice better or worse than other choices I could make’.
Would you say that there are true practical dilemmas? Is there ever a situation where, knowing everything you could know about a decision, there isn’t a better choice?
If I know there isn’t a better choice, I just follow my decision. Duh. (Having to choose between losing $500 and losing $490 is equivalent to losing $500 and then having to choose between gaining nothing and gaining $10: yes, the loss will sadden me, but that had better have no effect on my decision, and if it does it’s because of emotional hang-ups I’d rather not have. And replacing dollars with utilons wouldn’t change much.)
So you’re saying that there are no true moral dilemmas (no undecidable moral problems)?
Depends on what you mean by “undecidable”. There may be situations in which it’s hard in practice to decide whether it’s better to do A or to do B, sure, but in principle either A is better, B is better, or the choice doesn’t matter.
So, for example, suppose a situation where a (true) moral system demands both A and B, yet in this situation A and B are incompossible. Or it forbids both A and B, yet in this situation doing neither is impossible. Those examples have a pretty deontological air to them... Could we come up with examples of such dilemmas within consequentialism?
Well, the consequentialist version of a situation that demands A and B is one in which A and B provide equally positive expected consequences and no other option provides consequences that are as good. If A and B are incompossible, I suppose we can call this a moral dilemma if we like.
And, sure, consequentialism provides no tools for choosing between A and B, it merely endorses (A OR B). Which makes it undecidable using just consequentialism.
There are a number of mechanisms for resolving the dilemma that are compatible with a consequentialist perspective, though (e.g., picking one at random).
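As a sketch of that last point (the Option struct and the uniform random draw are illustrative assumptions, not anything from the comment): consequentialism alone only narrows things down to the set of expected-utility maximizers, and a separate but compatible mechanism, here a coin flip generalized to a random draw, settles which one gets done.

```cpp
#include <iostream>
#include <random>
#include <string>
#include <vector>

struct Option {
    std::string name;
    double expected_utility;  // assume expected consequences are already evaluated
};

// Consequentialism only endorses the set of maximizers ("A OR B");
// the random draw at the end is a separate but compatible tie-breaker.
std::string choose(const std::vector<Option>& options, std::mt19937& rng) {
    std::vector<const Option*> best;  // assumes options is non-empty
    for (const Option& o : options) {
        if (best.empty() || o.expected_utility > best.front()->expected_utility) {
            best = {&o};
        } else if (o.expected_utility == best.front()->expected_utility) {
            best.push_back(&o);
        }
    }
    std::uniform_int_distribution<std::size_t> pick(0, best.size() - 1);
    return best[pick(rng)]->name;
}

int main() {
    std::mt19937 rng(std::random_device{}());
    std::vector<Option> dilemma = {{"A", 10.0}, {"B", 10.0}};  // equally good, incompossible
    std::cout << choose(dilemma, rng) << '\n';                 // "A" or "B", chosen at random
}
```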
Thanks, that was helpful. I’d been having a hard time coming up with a consequentialist example.
Then, either the demand/forbiddance is not absolute or the moral system is broken.
How are you defining morality? If we use a shorthand definition that morality is a system that guides proper human action, then any “true moral dilemmas” would be a critique of whatever moral system failed to provide an answer, not proof that “true moral dilemmas” existed.
We have to make some choice. If a moral system stops giving us any useful guidance when faced with sufficiently difficult problems, that simply indicates a problem with the moral system.
ETA: For example, if I have a completely strict sense of ethics based upon deontology, I may feel an absolute prohibition on lying and an absolute prohibition on allowing humans to die. That would create a moral dilemma for that system in the classic case of Nazis seeking the Jews I’m hiding in my house. So I’d have to switch to a different ethical system. If I switched to a system of deontology with a value hierarchy, I could conclude that, under the circumstances, human life has a higher value than telling the truth to governmental authorities, and then decide to lie, solving the dilemma.
I strongly suspect that all true moral dilemmas are artifacts of the limitations of distinct moral systems, not morality per se. Since I am skeptical of moral realism, that is all the more the case; if morality can’t tell us how to act, it’s literally useless. We have to have some process for deciding on our actions.
I’m not: I anticipate that your answer to my question will vary on the basis of what you understand morality to be.
Would it? It doesn’t follow from that definition that dilemmas are impossible. This:

“I strongly suspect that all true moral dilemmas are artifacts of the limitations of distinct moral systems, not morality per se.”

is the claim I’m asking for an argument for.
I’m really confused about the point of this discussion.
The simple answer is: either a moral system cares whether you do action A or action B, preferring one to the other, or it doesn’t. If it does, then the answer to the dilemma is that you should do the action your moral system prefers. If it doesn’t, then you can do either one.
Obviously this simple answer isn’t good enough for you, but why not?
The tricky task is to distinguish between those 3 cases—and to find general rules which can do this in every situation in a unique way, and represent your concept of morality at the same time.
If you can do this, publish it.
Well, yes, finding a simple description of morality is hard. But you seem to be asking if there’s a possibility that it’s in principle impossible to distinguish between these 3 cases for some situation—and this is what you call a “true moral dilemma”—and I don’t see how the idea of that is coherent.
I did not call anything “true moral dilemma”.
Most dilemmas are situations where similar-looking moral guidelines lead to different decisions, or situations where common moral rules are inconsistent or not well-defined. In those cases, it is hard to decide whether the moral system prefers one action or the other, or does not care.
It seems to me to omit a (maybe impossible?) possibility: for example that a moral system cares about whether you do A or B in the sense that it forbids both A and B, and yet ~(A v B) is impossible. My question was just whether or not cases like these were possible, and why or why not.
I admit that I hadn’t thought of moral systems as forbidding options, only as ranking them, in which case that doesn’t come up.
If your morality does have absolute rules like that, there isn’t any reason why those rules couldn’t come into conflict. But even then, I wouldn’t say “this is a true moral dilemma” so much as “the moral system is self-contradictory”. Not that this is a great help to someone who does discover this about themselves.
Ideally, though, you’d only have one truly absolute rule, and a ranking between the rules, Laws of Robotics style.
So Kant, for example, thought that such moral conflicts were impossible, and he would have agreed with you that no moral theory can be both true and allow for moral conflicts. But it’s not obvious to me that the inference from ‘allows for moral conflict’ to ‘is a false moral theory’ is valid. I don’t have an axe to grind here; I was just curious whether anyone had an argument defending that move (or attacking it, for that matter).
I don’t think that it means it’s a false moral theory, just an incompletely defined one. In cases where it doesn’t tell you what to do (or, equivalently, tells you that both options are wrong), it’s useless, and a moral theory that did tell you what to do in those cases would be better.
That one thing a couple years ago qualifies.
But unless you get into self-referencing moral problems, no. I can’t think of one off the top of my head, but I suspect that you can find ones among decisions that affect your decision algorithm and decisions where your decision-making algorithm affects the possible outcomes. Probably like Newcomb’s problem, only twistier.
(Warning: this may be basilisk territory.)
(Double-post, sorry)
There are plenty of situations where two choices are equally good or equally bad. This is called “indifference”, not “dilemma”.
Those aren’t the situations I’m talking about.
I would make the more limited claim that the existence of irreconcilable moral conflicts is evidence for moral anti-realism.
In short, if you have a decision process (aka moral system) that can’t resolve a particular problem that is strictly within its scope, you don’t really have a moral system.
Which makes figuring out what we mean by moral change / moral progress incredibly difficult.
This seems to me to be a rephrasing and clarification of your original claim, which I read as saying something like ‘no true moral theory can allow moral conflicts’. But it’s not yet an argument for this claim.
I’m suddenly concerned that we’re arguing over a definition. It’s very possible to construct a decision procedure that tells one how to decide some, but not all moral questions. It might be that this is the best a moral decision procedure can do. Is it clearer to avoid using the label “moral system” for such a decision procedure?
This is a distraction from my main point, which was that asserting our morality changes when our economic resources change is an atypical way of using the label “morality.”
No, but if I understand what you’ve said, a true moral theory can allow for moral conflict, just because there are moral questions it cannot decide (the fact that you called them ‘moral questions’ leads me to think you think that these questions are moral ones even if a true moral theory can’t decide them).
You’re certainly right, this isn’t relevant to your main point. I was just interested in what I took to be the claim that moral conflicts (i.e. moral problems that are undecidable in a true moral theory) are impossible:

“If we have four people on a life boat and food for three, morality must provide a mechanism for deciding who gets the food.”

This is a distraction from your main point in at least one other sense: this claim is orthogonal to the claim that morality is not relative to economic conditions.
Yes, you’re correct that this was not an argument, simply my attempt to gesture at what I meant by the label “morality.” The general issue is that human societies are not rigorous about their use of the label “morality.” I like my usage because I think it is neutral and specific in meta-ethical disputes like the one we are having. For example, moral realists must determine whether they think “incomplete” moral systems can exist.
But beyond that, I should bow out, because I’m an anti-realist and this debate is between schools of moral realists.