If we can afford it.
Moral progress proceeds from economic progress.
Morality is contextual.
If we have four people on a lifeboat and food for three, morality must provide a mechanism for deciding who gets the food. Suppose that decision is made, and then Omega magically provides sufficient food for all—morality hasn’t changed, only the decision that morality calls for.
Technological advancement has certainly caused moral change (consider society after the introduction of the Pill). But having more resources does not, in itself, change what we think is right, only what we can actually achieve.
That’s an interesting claim. Are you saying that true moral dilemmas (i.e., situations where there is no right answer) are impossible? If so, how would you argue for that?
I think they are impossible. Morality can say “no option is right” all it wants, but we still must pick an option, unless the universe segfaults and time freezes upon encountering a dilemma. Whichever decision procedure we use to make that choice (flip a coin?) can count as part of morality.
I take it for granted that faced with a dilemma we must do something, so long as doing nothing counts as doing something. But the question is whether or not there is always a morally right answer. In cases where there isn’t, I suppose we can just pick randomly, but that doesn’t mean we’ve therefore made the right moral decision.
Are we ever damned if we do, and damned if we don’t?
When someone is in a situation like that, they lower their standard for “morally right” and try again. Functional societies avoid putting people in those situations because it’s hard to raise that standard back to its previous level.
Well, if all available options are indeed morally wrong, we can still try to see if any are less wrong than others.
Right, but choosing the lesser of two evils is simple enough. That’s not the kind of dilemma I’m talking about. I’m asking whether or not there are wholly undecidable moral problems. Choosing between one evil and a lesser evil is no more difficult than choosing between an evil and a good.
But if you’re saying that in any hypothetical choice, we could always find something significant and decisive, then this is good evidence for the impossibility of moral dilemmas.
It’s hard to say, really.
Suppose we define a “moral dilemma for system X” as a situation in which, under system X, all possible actions are forbidden.
Consider the systems that say “Actions that maximize this (unbounded) utility function are permissible, all others are forbidden.” Then the situation “Name a positive integer, and you get that much utility” is a moral dilemma for those systems; there is no utility-maximizing action, so all actions are forbidden and the system cracks.

It doesn’t help much if we require the utility function to be bounded; it’s still vulnerable to situations like “Name a real number less than 30, and you get that much utility” because there isn’t a largest real number less than 30.

The only way to get around this kind of attack by restricting the utility function is to require the range of the function to be a finite set. For example, if you’re a C++ program, your utility might be represented by a 32-bit unsigned integer, so when asked “How much utility do you want?” you just answer “2^32 − 1”, and when asked “How much utility less than 30.5 do you want?” you just answer “30”.
(Ugh, that paragraph was a mess...)
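A minimal sketch of that finite-range move, assuming a toy agent whose utility is a uint32_t (the function names are mine, purely illustrative):

    // Toy agent with a finite utility range: both "attack" queries now have
    // a best answer, so no situation leaves every action forbidden.
    #include <cmath>
    #include <cstdint>
    #include <iostream>

    // "Name a positive integer, and you get that much utility."
    // With a finite range there IS a maximum: 2^32 - 1.
    uint32_t bestUnboundedAnswer() {
        return UINT32_MAX; // 4294967295 == 2^32 - 1
    }

    // "Name a real number less than `bound`, and you get that much utility."
    // Over the reals no best answer exists; over uint32_t it is just the
    // largest integer strictly below the bound.
    uint32_t bestBoundedAnswer(double bound) {
        double floored = std::floor(bound);
        if (floored == bound) floored -= 1.0; // bound 30.0 -> answer 29
        return static_cast<uint32_t>(floored); // bound 30.5 -> answer 30
    }

    int main() {
        std::cout << bestUnboundedAnswer() << "\n";   // 4294967295
        std::cout << bestBoundedAnswer(30.5) << "\n"; // 30
    }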
That is an awesome example. I’m absolutely serious about stealing that from you (with your permission).
Do you think this presents a serious problem for utilitarian ethics? It seems like it should, though I guess this situation doesn’t come up all that often.
ETA: Here’s a thought on a reply. Given restrictions like time and knowledge of the names of large numbers, isn’t there in fact a largest number you can name? Something like Graham’s number won’t work (way too small) because you can always add one to it. But transfinite numbers aren’t made larger by adding one. And likewise with the largest real number under thirty: maybe you can use a function to specify the number? Or if not, say ‘29.999...’, repeating the nines for as long as you can before the time runs out (or until you calculate that the marginal utility gain balances the cost of saying ‘nine’ over and over for a long time).
Transfinite cardinals aren’t, but transfinite ordinals are. And anyway transfinite cardinals can be made larger by exponentiating them.
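In symbols (my summary of the standard arithmetic, not the commenter’s):

    \aleph_0 + 1 = \aleph_0 \ \text{(cardinal addition)}, \qquad
    \omega + 1 > \omega \ \text{(ordinal addition)}, \qquad
    2^{\aleph_0} > \aleph_0 \ \text{(Cantor)}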
Good point. What do you think of Chrono’s dilemma?
“Twenty-nine point nine nine nine nine …” until the effort of saying “nine” again becomes less than the corresponding utility difference. ;-)
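A toy version of that stopping rule (my construction; the cost figure is invented). After k nines the named value is 30 − 10^−k, so the next nine gains 9 × 10^−(k+1) utility; stop once that falls below the cost of saying it:

    // Keep appending nines to "29.999..." while the marginal utility of the
    // next nine still exceeds the cost of uttering it.
    #include <cmath>
    #include <iostream>

    int ninesWorthSaying(double costPerNine) {
        int k = 0; // nines said so far
        // The (k+1)-th nine raises the named value by 9 * 10^-(k+1) utility.
        while (9.0 * std::pow(10.0, -(k + 1)) > costPerNine) ++k;
        return k;
    }

    int main() {
        // At a cost of 1e-6 utility per "nine", six nines are worth saying.
        std::cout << ninesWorthSaying(1e-6) << "\n"; // prints 6
    }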
That is an awesome example. I’m absolutely serious about stealing that from you (with your permission).

Sure, be my guest.
Do you think this presents a serious problem for utilitarian ethics? It seems like it should, though I guess this situation doesn’t come up all that often.

Honestly, I don’t know. Infinities are already a problem, anyway.
My view is that a more meaningful question than ‘is this choice good or bad’ is ‘is this choice better or worse than other choices I could make’.
Would you say that there are true practical dilemmas? Is there ever a situation where, knowing everything you could know about a decision, there isn’t a better choice?
If I know there isn’t a better choice, I just follow my decision. Duh. (Having to choose between losing $500 and losing $490 is equivalent to losing $500 and then having to choose between gaining nothing and gaining $10: yes, the loss will sadden me, but that had better have no effect on my decision, and if it does it’s because of emotional hang-ups I’d rather not have. And replacing dollars with utilons wouldn’t change much.)
So you’re saying that there are no true moral dilemmas (no undecidable moral problems)?
Depends on what you mean by “undecidable”. There may be situations in which it’s hard in practice to decide whether it’s better to do A or to do B, sure, but in principle either A is better, B is better, or the choice doesn’t matter.
So, for example, suppose a situation where a (true) moral system demands both A and B, yet in this situation A and B are incompossible. Or it forbids both A and B, yet in this situation doing neither is impossible. Those examples have a pretty deontological air to them… could we come up with examples of such dilemmas within consequentialism?
could we come up with examples of such dilemmas within consequentialism?

Well, the consequentialist version of a situation that demands A and B is one in which A and B provide equally positive expected consequences and no other option provides consequences that are as good. If A and B are incompossible, I suppose we can call this a moral dilemma if we like.
And, sure, consequentialism provides no tools for choosing between A and B, it merely endorses (A OR B). Which makes it undecidable using just consequentialism.
There are a number of mechanisms for resolving the dilemma that are compatible with a consequentialist perspective, though (e.g., picking one at random).
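A sketch of that resolution (my own construction; the action names and values are invented): consequentialism narrows the field to the expected-value maximizers, and a random pick, about which it is indifferent, does the rest.

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <random>
    #include <string>
    #include <vector>

    struct Action { std::string name; double expectedValue; };

    // Consequentialism endorses the whole set of maximizers; any compatible
    // mechanism (here, a uniform random draw) selects one member of that set.
    std::string chooseAction(const std::vector<Action>& actions, std::mt19937& rng) {
        double best = actions.front().expectedValue;
        for (const auto& a : actions) best = std::max(best, a.expectedValue);

        std::vector<std::string> maximizers;
        for (const auto& a : actions)
            if (a.expectedValue == best) maximizers.push_back(a.name);

        std::uniform_int_distribution<std::size_t> pick(0, maximizers.size() - 1);
        return maximizers[pick(rng)];
    }

    int main() {
        std::mt19937 rng(std::random_device{}());
        std::vector<Action> actions = {{"A", 10.0}, {"B", 10.0}, {"C", 7.0}};
        std::cout << chooseAction(actions, rng) << "\n"; // "A" or "B", never "C"
    }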
Thanks, that was helpful. I’d been having a hard time coming up with a consequentialist example.
So, for example, suppose a situation where a (true) moral system demands both A and B, yet in this situation A and B are incompossible. Or it forbids both A and B, yet in this situation doing neither is impossible.

Then, either the demand/forbiddance is not absolute or the moral system is broken.
How are you defining morality? If we use a shorthand definition that morality is a system that guides proper human action, then any “true moral dilemmas” would be a critique of whatever moral system failed to provide an answer, not proof that “true moral dilemmas” existed.
We have to make some choice. If a moral system stops giving us any useful guidance when faced with sufficiently difficult problems, that simply indicates a problem with the moral system.
ETA: For example, if I have a completely strict sense of ethics based upon deontology, I may feel an absolute prohibition on lying and an absolute prohibition on allowing humans to die. That would create a moral dilemma for that system in the classical case of Nazis seeking Jews that I’m hiding in my house. So I’d have to switch to a different ethical system. If I switched to a system of deontology with a value hierarchy, I could conclude that human life has a higher value than telling the truth to governmental authorities under the circumstances and then decide to lie, solving the dilemma.
I strongly suspect that all true moral dilemmas are artifacts of the limitations of distinct moral systems, not morality per se. Since I am skeptical of moral realism, that is all the more the case; if morality can’t tell us how to act, it’s literally useless. We have to have some process for deciding on our actions.
I’m not: I anticipate that your answer to my question will vary on the basis of what you understand morality to be.
Would it? It doesn’t follow from that definition that dilemmas are impossible. This:

I strongly suspect that all true moral dilemmas are artifacts of the limitations of distinct moral systems, not morality per se.

Is the claim I’m asking for an argument for.
I’m really confused about the point of this discussion.
The simple answer is: either a moral system cares whether you do action A or action B, preferring one to the other, or it doesn’t. If it does, then the answer to the dilemma is that you should do the action your moral system prefers. If it doesn’t, then you can do either one.
Obviously this simple answer isn’t good enough for you, but why not?
The tricky task is to distinguish between those 3 cases—and to find general rules which can do this in every situation in a unique way, and represent your concept of morality at the same time.
If you can do this, publish it.
Well, yes, finding a simple description of morality is hard. But you seem to be asking if there’s a possibility that it’s in principle impossible to distinguish between these 3 cases for some situation—and this is what you call a “true moral dilemma”—and I don’t see how the idea of that is coherent.
I did not call anything “true moral dilemma”.
Most dilemmas are situations where similar-looking moral guidelines lead to different decisions, or situations where common moral rules are inconsistent or not well-defined. In those cases, it is hard to decide whether the moral system prefers one action or the other, or does not care.
It seems to me to omit a (maybe impossible?) possibility: for example that a moral system cares about whether you do A or B in the sense that it forbids both A and B, and yet ~(A v B) is impossible. My question was just whether or not cases like these were possible, and why or why not.
I admit that I hadn’t thought of moral systems as forbidding options, only as ranking them, in which case that doesn’t come up.
If your morality does have absolute rules like that, there isn’t any reason why those rules wouldn’t come in conflict. But even then, I wouldn’t say “this is a true moral dilemma” so much as “the moral system is self-contradictory”. Not that this is a great help to someone who does discover this about themselves.
Ideally, though, you’d only have one truly absolute rule, and a ranking between the rules, Laws of Robotics style.
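A sketch of that ranking idea (my construction; the rules here are stand-ins for the hiding-Jews case upthread): higher-ranked rules filter first, and a rule that would forbid every remaining option yields rather than generating an “all actions forbidden” dilemma.

    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    using Action = std::string;
    using Rule = std::function<bool(const Action&)>; // true = action permitted

    // Apply rules in priority order. Each rule filters the remaining actions,
    // but if it would forbid everything left, it gives way to the rules above it.
    std::vector<Action> permissible(std::vector<Action> actions,
                                    const std::vector<Rule>& rankedRules) {
        for (const auto& rule : rankedRules) {
            std::vector<Action> kept;
            for (const auto& a : actions)
                if (rule(a)) kept.push_back(a);
            if (!kept.empty()) actions = kept;
        }
        return actions; // never empty if the input wasn't
    }

    int main() {
        std::vector<Action> actions = {"lie", "tell truth"};
        std::vector<Rule> rules = {
            [](const Action& a) { return a != "tell truth"; }, // 1: protect life
            [](const Action& a) { return a != "lie"; },        // 2: don't lie
        };
        for (const auto& a : permissible(actions, rules))
            std::cout << a << "\n"; // prints "lie": the lower rule yields
    }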
But even then, I wouldn’t say “this is a true moral dilemma” so much as “the moral system is self-contradictory”.

So, Kant for example thought that such moral conflicts were impossible, and he would have agreed with you that no moral theory can be both true and allow for moral conflicts. But it’s not obvious to me that the inference from ‘allows for moral conflict’ to ‘is a false moral theory’ is valid. I don’t have an axe to grind here; I was just curious if anyone had an argument defending that move (or attacking it, for that matter).
I don’t think that it means it’s a false moral theory, just an incompletely defined one. In cases where it doesn’t tell you what to do (or, equivalently, tells you that both options are wrong), it’s useless, and a moral theory that did tell you what to do in those cases would be better.
That one thing a couple years ago qualifies.
But unless you get into self-referencing moral problems, no. I can’t think of one off the top of my head, but I suspect that you can find ones among decisions that affect your decision algorithm and decisions where your decision-making algorithm affects the possible outcomes. Probably like Newcomb’s problem, only twistier.
(Warning: this may be basilisk territory.)
(Double-post, sorry)
There are plenty of situations where two choices are equally good or equally bad. This is called “indifference”, not “dilemma”.
Those aren’t the situations I’m talking about.
I would make the more limited claim that the existence of irreconcilable moral conflicts is evidence for moral anti-realism.
In short, if you have a decision process (aka moral system) that can’t resolve a particular problem that is strictly within its scope, you don’t really have a moral system.
Which makes figuring out what we mean by moral change / moral progress incredibly difficult.
In short, if you have a decision process (aka moral system) that can’t resolve a particular problem that is strictly within its scope, you don’t really have a moral system.

This seems to me to be a rephrasing and clarifying of your original claim, which I read as saying something like ‘no true moral theory can allow moral conflicts’. But it’s not yet an argument for this claim.
I’m suddenly concerned that we’re arguing over a definition. It’s very possible to construct a decision procedure that tells one how to decide some, but not all moral questions. It might be that this is the best a moral decision procedure can do. Is it clearer to avoid using the label “moral system” for such a decision procedure?
This is a distraction from my main point, which was that asserting our morality changes when our economic resources change is an atypical way of using the label “morality.”
Is it clearer to avoid using the label “moral system” for such a decision procedure?

No, but if I understand what you’ve said, a true moral theory can allow for moral conflict, just because there are moral questions it cannot decide (the fact that you called them ‘moral questions’ leads me to think you take these questions to be moral ones even if a true moral theory can’t decide them).
You’re certainly right, this isn’t relevant to your main point. I was just interested in what I took to be the claim that moral conflicts (i.e. moral problems that are undecidable in a true moral theory) are impossible:
If we have four people on a lifeboat and food for three, morality must provide a mechanism for deciding who gets the food.

This is a distraction from your main point in at least one other sense: this claim is orthogonal to the claim that morality is not relative to economic conditions.
Yes, you’re correct that this was not an argument, simply my attempt to gesture at what I meant by the label “morality.” The general issue is that human societies are not rigorous about the use of the label morality. I like my usage because I think it is neutral and specific in meta-ethical disputes like the one we are having. For example, moral realists must determine whether they think “incomplete” moral systems can exist.
But beyond that, I should bow out, because I’m an anti-realist and this debate is between schools of moral realists.
Rephrasing the original question: if we can anticipate the guiding principles underlying the morality of the future, ought we apply those principles to our current circumstances to make decisions, supposing they are different?
Though you seem to be implicitly assuming that the guiding principles don’t change, merely the decisions, and those changed decisions are due to the closest implementable approximation of our guiding principles varying over time based on economic change. (Did I understand that right?)
Pretty much. Though it feels totally different from the inside. Athens could not have thrived without slave labor, and so you find folks arguing that slavery is moral, not just necessary. Since you can’t say “Action A is immoral but economically necessary, so we shall A” you instead say “Action A is moral, here are some great arguments to that effect!”
And when we have enough money, we can even invent new things to be upset about, like vegetable rights.
(nods) Got it.
On your view, is there any attempt at internal coherence?
For example, given an X such that X is equally practical (economically) in an Athenian and post-Athenian economy, and where both Athenians and moderns would agree that X is more “consistent with” slavery than non-slavery, would you expect Athenians to endorse X and moderns to reject it, or would you expect other (non-economic) factors, perhaps random noise, to predominate? (Or some third option?)
Or is such an X incoherent in the first place?
Can you give a more concrete example? I don’t understand your question.
I can’t think of a concrete example that doesn’t introduce derailing specifics.
Let me try a different question that gets at something similar: do you think that all choices a society makes that it describes as “moral” are economic choices in the sense you describe here, or just that some of them are?
Edit: whoops! got TimS and thomblake confused. Um. Unfortunately, that changes nothing of consequence: I still can’t think of a concrete example that doesn’t derail. But my follow-up question is not actually directed to Tim. Or, rather, ought not have been.
Probably a good counterexample would be the right for certain groups to work any job they’re qualified for, for example women or people with disabilities. Generally, those changes were profitable, and would have been at any time society accepted them.
I don’t understand the position you are arguing and I really want to. Either illusion of transparency or I’m an idiot. And TheOtherDave appears to understand you. :(
I’m not really arguing for a position—the grandparent was a counterexample to the general principle I had proposed upthread, since the change was both good and an immediate economic benefit, and it took a very long time to be adopted.
(nods) Yup, that’s one example I was considering, but discarded as too potentially noisy.
But, OK, now that we’re here… if we can agree for the sake of comity that giving women the civil right to work any job would have been economically practical for Athenians, and that they nevertheless didn’t do so, presumably due to some other non-economic factors… I guess my question is, would you find it inconsistent, in that case, to find Athenians arguing that doing so would be immoral?
I don’t think so. I’m pretty sure lots of things can stand in the way of moral progress.
What is progress with respect to either? Could you possibly mean that moral states—the moral conditions of a society—follow from the economic state—the condition and system of the economy? I do find it hard to see a clear, unbiased definition of moral or economic progress.
Moral progress is a trend or change for the better in the morality of members of a society. For example, when the United States went from widespread acceptance of slavery to widespread rejection of slavery, that was moral progress on most views of morality.
Economic progress is a trend or change that results in increased wealth for a society.
In general, widespread acceptance of a moral principle, like our views on slavery, animal rights, vegetable rights, and universal minimum income, only comes after we can afford it.
I think he’s trying to say that having resources is a prerequisite to spending them on moral things like universal pay, so we need to pursue wealth if we want to pursue morality. Technically, economic progress is more of a prerequisite to moral progress than a sufficient cause though, as economic progress can also result in bad moral outcomes depending on what we do with our wealth.
What is moral progress? - Is having a society with a vast disparity between rich and poor where the poor support the rich through the resource of their labor considered morally progressed from a more egalitarian tribal state? Is the progress of the empire to a point of collapse and the start of some new empire considered moral progress?
What is economic progress? - Is having a society with a vast disparity between rich and poor where the poor support the rich through the resource of their labor considered economically progressed from the primitive hunter-gatherer society where everyone had more free time? Is the progress of the empire to a point where the disparity in wealth incites revolution or causes collapse considered economic progress?
You’re not making arguments.
The points you raise are not responsive to the points that either he or I made.
If it increases total aggregate utility. Tribes were small, there weren’t very many people. I’m also not sure how happy most tribes were. Additionally, bad moral societies might be necessary to transition to awesome ones.
You conflate moral and economic progress in your second paragraph.
A financial system which collapses probably isn’t too healthy. It still might have improved things overall through its pre-collapse operations though.
Universal pay does not even seem possible now.
You do not answer the question, and you conflate the questions.
How is economic progress measured? If you say aggregate utility, please explain how that is measured.
How is moral progress measured?
My argument is simple—the measure of either of these is based on poor heuristics.
My first reaction is to want to say that economic progress is an increase in purchasing power. However, purchasing power is measured with reference to the utility of goods. That would be fine as a solution, except that those definitions would mean that it would be literally impossible for an increase in economic progress to be bad on utilitarian grounds. That’s not what “economic progress” is generally taken to mean, so I won’t use that definition.
Instead, I’ll say that economic progress is an increase in the ability to produce goods, whether those goods are good or bad. This increase can be either numerical or qualitative, I don’t care. Now, it might not be possible to quantify this precisely, but that’s not necessary to determine that economic progress occurs. Clearly, we are now farther economically progressed than we were in the Dark Ages.
Moral progress would be measured depending on the moral theory you’re utilizing. I would use a broad sort of egoism, personally, but most people here would use utilitarianism.
With an egoist framework, you could keep track of how happy or sad you were directly. You could also measure the prevalence of factors that tend to make you happy and then subtract the prevalence of factors that tend to make you sad (while weighting for relative amounts of happiness and sadness, of course), in order to get a more objective account of your own happiness.
With a utilitarian framework, you would measure the prevalence of things that tend to make all people happy, and then subtract the prevalence of things that tend to make all people sad. If there was an increase in the number of happy people, then that would mean moral progress in the eyes of a utilitarian.
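A toy rendering of that bookkeeping (my framing; the factors and weights are invented): weight the prevalence of happiness- and sadness-producing factors, and call a positive change between two snapshots moral progress.

    #include <iostream>
    #include <vector>

    // weight > 0 for factors that tend to make people happy, < 0 for sad.
    struct Factor { double prevalence; double weight; };

    double netHappiness(const std::vector<Factor>& factors) {
        double net = 0.0;
        for (const auto& f : factors) net += f.prevalence * f.weight;
        return net;
    }

    int main() {
        std::vector<Factor> before = {{0.3, 2.0}, {0.6, -1.0}}; // net 0.0
        std::vector<Factor> after  = {{0.5, 2.0}, {0.4, -1.0}}; // net 0.6
        std::cout << (netHappiness(after) > netHappiness(before)
                      ? "moral progress" : "no progress") << "\n";
    }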
You make no argument. You merely ask a question. If you have a general counterargument or want to refute the specifics of any of my points, feel free. So far, you haven’t done anything like that. Also, although it might not be possible to quantify economic or moral progress precisely, we can probably do it well enough for most practical purposes. I don’t understand the purpose of the points you’re trying to raise here.
My original post refuted the statement:

Moral progress proceeds from economic progress.

You interjected:

I think he’s trying to say …. we need to pursue wealth if we want to pursue morality. …. economic progress can also result in bad moral outcomes depending on what we do with our wealth.
You do not like the questions, the Socratic method? OK, I assert the basis of the argument and the point of the questions:
A clear, unbiased definition of moral or economic progress does not exist.
You present models for deciding both. There exist models where economic progress varies inversely with moral progress, such as possible outcomes from the utilitarian perspective that are covered in Ethics 101 at most colleges, and the manifest reality of a system where economic progress has been used to justify an abundance of atrocities. There also exist models in either category which define progress in entirely different directions, and so any statement of progress is inherently biased.
There is a link between economic states/systems and moral conditions, and it appeared that the author of the statement “Moral progress proceeds from economic progress.” may have been oversimplifying the issue to the point of making it unintelligible.
You mentioned wealth, which implies an inherent bias also. I can personally assert a different version of wealth which excludes much of what most people consider wealth. Most people think wealth includes assets like cash or gold, which I see as having an immoral nature, and so their idea of accumulating wealth is immoral in my POV. (I do not include a lengthy moral case, but rather assert such a case exists.) So if you see progress and wealth as interrelated, then I would ask for a definition of wealth.
You also assert that economic progress is an increased ability to produce goods. I assert that there are many modes of production, of which the current industrial mode finds value in quantity, which you state is the measure. Two biases arise:
1 - The bias inherent to the mode: quantity is not the only measure of progress. Competing values include quality in aesthetics, ergonomics, environmental impact, functionality, and modularity in use (consider open source values). I do not think having more stuff is a sign of economic progress, and I am not alone in finding that the measure you have asserted says nothing of “progress”—you of course argue differently, and thus we can say measures of progress differ and are thus inherently biased.
2 - Which mode of production is more progressed? I do not think industrialization is progress. I see many flaws in the results. Too much damage from that mode, imho. I am not here to argue that position but rather to assert it exists.
Is my point about the bias inherent in describing progress clear, or do you think that there exists some definition we all agree upon as to what progress in any area is?
You say that economic production and moral progress aren’t the same. I have already said the same thing; I have already said that increased economic production might lead to morally wrong outcomes depending on how those products end up being used.
You can assert a different definition of wealth if you want, sure. I don’t understand what argument this is supposed to be responsive to. There’s a common understanding of wealth, and just because different people define wealth differently, that wouldn’t invalidate my point. Having resources is key to investing them, and investing resources is key to doing moral things.
You say that quantity isn’t the sole realm of value. I think that’s true. But if you take the quantity of goods and multiply them by the quality of goods (that is, the utility of the goods, like I mentioned before) then that is a sufficient definition of total economic value.
The mode of production that is most progressed is the one which produces the most.
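A toy illustration of the quantity-times-quality definition above (my construction; the goods and numbers are invented):

    #include <iostream>
    #include <string>
    #include <vector>

    struct Good { std::string name; double quantity; double utilityPerUnit; };

    // Total economic value = sum over goods of quantity * quality,
    // reading "quality" as utility per unit.
    double totalEconomicValue(const std::vector<Good>& goods) {
        double total = 0.0;
        for (const auto& g : goods) total += g.quantity * g.utilityPerUnit;
        return total;
    }

    int main() {
        // Many cheap goods and a few fine ones can come out equal on this measure.
        std::vector<Good> a = {{"widgets", 100.0, 1.0}};
        std::vector<Good> b = {{"widgets", 20.0, 5.0}};
        std::cout << totalEconomicValue(a) << " "
                  << totalEconomicValue(b) << "\n"; // 100 100
    }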