I take it for granted that faced with a dilemma we must do something, so long as doing nothing counts as doing something. But the question is whether or not there is always a morally right answer. In cases where there isn’t, I suppose we can just pick randomly, but that doesn’t mean we’ve therefore made the right moral decision.
Are we ever damned if we do, and damned if we don’t?
When someone is in a situation like that, they lower their standard for “morally right” and try again. Functional societies avoid putting people in those situations because it’s hard to raise that standard back to its previous level.
Well, if all available options are indeed morally wrong, we can still try to see if any are less wrong than others.
Right, but choosing the lesser of two evils is simple enough. That’s not the kind of dilemma I’m talking about. I’m asking whether or not there are wholly undecidable moral problems. Choosing between one evil and a lesser evil is no more difficult than choosing between an evil and a good.
But if you’re saying that in any hypothetical choice, we could always find something significant and decisive, then this is good evidence for the impossibility of moral dilemmas.
It’s hard to say, really.
Suppose we define a “moral dilemma for system X” as a situation in which, under system X, all possible actions are forbidden.
Consider the systems that say “Actions that maximize this (unbounded) utility function are permissible; all others are forbidden.” Then the situation “Name a positive integer, and you get that much utility” is a moral dilemma for those systems: there is no utility-maximizing action, so all actions are forbidden and the system cracks.

It doesn’t help much if we require the utility function to be bounded; the system is still vulnerable to situations like “Name a real number less than 30, and you get that much utility,” because there isn’t a largest real number less than 30. The only way to get around this kind of attack by restricting the utility function is to require the range of the function to be a finite set. For example, if you’re a C++ program, your utility might be represented by a 32-bit unsigned integer, so when asked “How much utility do you want?” you just answer “2^32 − 1”, and when asked “How much utility less than 30.5 do you want?” you just answer “30”.
(Ugh, that paragraph was a mess...)
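Here is a minimal sketch of that last point, assuming a hypothetical C++ agent whose utility really is stored in a 32-bit unsigned integer; the function names and the truncation shortcut are illustrative, not anything from the thread. Because the set of representable utilities is finite, both offers have a well-defined maximizing answer, which is exactly what the unbounded-integer and real-valued versions lack.

```cpp
#include <cstdint>
#include <iostream>
#include <limits>

// Hypothetical agent whose utility is a 32-bit unsigned integer.
// Since the representable range is finite, "ask for the most utility
// you can get" is always a well-defined, maximizing answer.

// "Name a positive integer, and you get that much utility."
std::uint32_t answer_unbounded_offer() {
    // 2^32 - 1: the largest utility this agent can even represent.
    return std::numeric_limits<std::uint32_t>::max();
}

// "Name a utility less than `bound`, and you get that much utility."
// Assumes the bound is not itself a whole number (as with 30.5), so
// truncation toward zero yields the largest representable value below it.
std::uint32_t answer_bounded_offer(double bound) {
    return static_cast<std::uint32_t>(bound);
}

int main() {
    std::cout << answer_unbounded_offer() << "\n";    // 4294967295
    std::cout << answer_bounded_offer(30.5) << "\n";  // 30
}
```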
That is an awesome example. I’m absolutely serious about stealing that from you (with your permission).
Do you think this presents a serious problem for utilitarian ethics? It seems like it should, though I guess this situation doesn’t come up all that often.
ETA: Here’s a thought on a reply. Given restrictions like time and knowledge of the names of large numbers, isn’t there in fact a largest number you can name? Something like Graham’s number won’t work (way too small), because you can always add one to it. But transfinite numbers aren’t made larger by adding one. And likewise with the largest real number under thirty: maybe you can use a function to specify the number? Or if not, just say “29.999...” with as many nines as you can before the time runs out (or until you calculate that the utility benefit reaches equilibrium with the cost of saying ‘nine’ over and over for a long time).
Transfinite cardinals aren’t, but transfinite ordinals are. And anyway transfinite cardinals can be made larger by exponentiating them.
Good point. What do you think of Chrono’s dilemma?
“Twenty-nine point nine nine nine nine …” until the effort of saying “nine” again becomes less than the corresponding utility difference. ;-)
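A quick sketch of that stopping rule, with an entirely made-up cost per spoken “nine”: with n nines the named value is 30 − 10^(−n), so the (n+1)-th nine buys 9·10^(−(n+1)) extra utility, and you stop once that marginal gain falls below the cost of saying it.

```cpp
#include <cmath>
#include <iostream>

// With n nines, "twenty-nine point nine nine ..." names 30 - 10^(-n),
// so the (n+1)-th nine is worth 9 * 10^(-(n+1)) extra utility.
// Keep going while that marginal gain exceeds the (hypothetical)
// cost of uttering one more "nine".
int nines_worth_saying(double cost_per_nine) {
    int n = 0;
    while (9.0 * std::pow(10.0, -(n + 1)) > cost_per_nine) {
        ++n;
    }
    return n;
}

int main() {
    // At a made-up cost of 1e-6 utility per "nine", only the first
    // six nines pay for themselves.
    std::cout << nines_worth_saying(1e-6) << "\n";  // 6
}
```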
Sure, be my guest.
Honestly, I don’t know. Infinities are already a problem, anyway.