A confusion about deontology and consequentialism
I think there’s a confusion in our discussions of deontology and consequentialism. I’m writing this post to try to clear up that confusion. First let me say that this post is not about any territorial facts. The issue here is how we use the philosophical terms of art ‘consequentialism’ and ‘deontology’.
The confusion is often stated thusly: “deontological theories are full of injunctions like ‘do not kill’, but they generally provide no (or no interesting) explanations for these injunctions.” There is of course an equivalently confused, though much less common, complaint about consequentialism.
This is confused because the term ‘deontology’ in philosophical jargon picks out a normative ethical theory, while the question ‘how do we know that it is wrong to kill?’ is not a normative but a meta-ethical question. Similarly, consequentialism contains in itself no explanation for why pleasure or utility are morally good, or why consequences should matter to morality at all. Nor does consequentialism/deontology make any claims about how we know moral facts (if there are any). That is also a meta-ethical question.
Some consequentialists and deontologists are also moral realists. Some are not. Some believe in divine commands, some are hedonists. Consequentialists and deontologists in practice always also subscribe to some meta-ethical theory which purports to explain the value of consequences or the source of injunctions. But consequentialism and deontology as such do not. In order to avoid strawmanning either the consequentialist or the deontologist, it’s important to either discuss the comprehensive views of particular ethicists, or to carefully leave aside meta-ethical issues.
This Stanford Encyclopedia of Philosophy article provides a helpful overview of the issues in the consequentialist-deontologist debate, and is careful to distinguish between ethical and meta-ethical concerns.
This is right in spirit but wrong in letter:
It’s not a confusion; it’s just something that isn’t true. Deontological theories routinely provide explanations for these injunctions, and some of these explanations are interesting (though I guess that’s subjective).
No it isn’t. “Why is it wrong to kill?” is a great example of a normative question! Utilitarianism provides an answer. So does deontology. A meta-ethical question would be “what does it mean to say, ‘it’s wrong to kill’”. An applied ethics question would be “in circumstances x, y and z, is it wrong to kill?”. Normative theories are absolutely supposed to answer this question.
While I guess this could be logically possible, anyone who is not a moral realist needs to provide some kind of explanation for what exactly a normative theory is supposed to be doing and what it means to assert one if there are no moral facts. I say this as a non-realist who is pretty confused about what everyone thinks they’re arguing over.
To be absolutely clear, my post is about the way academic philosophy happens to organize a certain debate, and I cite that SEP article as my major source. It will be very helpful to me if you point out where you disagree with the SEP article (and on what basis), or where you think I’ve misread it. (Look specifically at this section: http://plato.stanford.edu/entries/ethics-deontological/#DeoTheMet)
Again, there is no fact of the matter about what is a normative and what is a meta-ethical question, just a convention.
Being a moral anti-realist is compatible with having, and following, a moral theory: you just think you have reasons to be moral which are not based on mind-independent facts. For example, you might think convention gives you reason to be moral, where conventionalism is traditionally described as a form of non-realism. (see: http://plato.stanford.edu/entries/moral-anti-realism/#ChaMorAntRea)
Being a deontologist (I think, and my post assumes) is even compatible with being a moral nihilist: “Moral principles must come in the form of injunctions, and there are no such injunctions.”
Well there is a fact of the matter, it’s just a fact about a convention.
Yes, I understand what your post was arguing and I’m familiar with the way academic philosophy organizes this debate. And yes, deontology does not presume any particular metaethics. Your error, as far as I can tell, is in not getting what counts as a meta-ethical question and what doesn’t. “Why is murder wrong?” is a straightforward question for normative theory. Kantian deontology, for instance, answers by saying “Murder is wrong because it violates the Categorical Imperative.” And then there are a lot of details about what the Categorical Imperative is and how murder violates it. Rule utilitarianism says that murder is wrong because a rule that prohibits murder provides for the greatest good for the greatest number. And so on. Normative theories exist precisely to explain why certain actions are moral and other actions are immoral. A normative theory that can’t explain why murder is (usually) immoral is a terribly incomplete normative theory.
Meta-ethics isn’t about asking why normative claims are true. It is about asking what it means to make a moral claim. Thus the “meta”. E.g. questions like “are there moral facts?”
At no point have I mentioned credentials to try and win a philosophical debate on Less Wrong. But if there is anything my philosophy degree makes me a minimal expert in, it’s jargon.
I realize this, but this resembles just about no one interested in debating consequentialism vs. deontology.
Right. Like I said, it isn’t logically impossible. It’s just silly and sociologically implausible.
Um, that’s not a very interesting question, is it. Making a moral claim means, more or less: “I am right and you are wrong and you should do what I say”. Note that this is not a morally absolutist view in the meta-ethical sense: even moral relativists make such claims all the time, they just admit that one’s peculiar customs or opinions might affect the kinds of moral claims one makes.
What’s a more interesting question is, “what should happen when folks make incompatible moral claims, or claim incompatible rights”. This is what ethics (in the Rushworth Kidder sense of setting “right against right”) is all about. When we do ethics, we abandon what might be called (in a perhaps naïve and philosophically incorrect way) “moral absolutism” or the simple practice of just making moral claims, and start debating them in public. Law, politics and civics are a further complication: they arise when societies get more complex and less “tribal”, so simple ethical reasoning is no longer enough and we need more of a formal structure.
Well your attempt to explain what a normative claim is actually includes a normative claim so I don’t think you’ve successfully dissolved the question. You are “right” about what? Facts? The world? What kind of facts? What kind of evidence can you offer to demonstrate that you are right and I am wrong?
That “should” is there again.
I don’t imagine there ever was a “simple practice of just making moral claims”. Moral claims are generally claims made on others and they are speech acts which means they exist to communicate something. People don’t spend a lot of time making moral claims that everyone agrees with and abides by which means it’s pretty much in the nature of a moral claim to be part of a debate or discussion.
I can’t see the importance or the force of the distinction you are trying to make.
Who says I need “evidence” to argue that you should do something? I could rely on my perceived authority—in fact, you could take this as a definition of what “moral authority” is all about. Sometimes that moral authority comes from religion (or cosmology, more generally), sometimes it’s derived from tradition, etc. So I have to dispute your claim that:
since it is quite self-evident that many people and institutions have made moral claims in the past that were not perceived as properly being part of a “debate” or “discussion”. It’s true that, sometimes, moral claims are seen in such a way—especially when they’re seen as originating from individual instinct and cognition, and thus leading people to think of themselves as being on the “right side” of an ethical dilemma or conflict. And yet, at some level, more formalized systems like law and politics presumably rely on widespread trust in the “system” itself as a moral authority, if only one with a very limited scope.
So, you’re never going to get an answer to the question of “what a normative claim is”, because the whole concept involves a kind of tension. There’s an “authority to be followed” side, and an “internal moral cognition” side, and both can be right to some degree and even interact in a fruitful way.
I still feel like we’re talking past each other. I made a straightforward empirical claim in my post. So all we need to do is find some empirical evidence. If you accept that SEP typically and in this case represents the academic state of the art and conventional usage, then look at the last section of the SEP article I linked to. It agrees with me (I think).
If you don’t think the SEP article represents the convention accurately, just say that and we can move on to another source. There’s no sense in arguing about whether or not the distinction between normative and meta ethics reported in the SEP article makes sense. I agree that it does not. But we’re not arguing about that. We’re arguing about what the convention actually is.
The SEP does not agree with you. Nowhere in that section does it say that “Why is murder wrong?” is a meta-ethical question. All it says is that deontology does not assume a meta-ethical position, though certain meta-ethical positions are more hospitable to it. I agree with you and the SEP here.
I’m not saying deontology is a meta-ethical theory. It isn’t. As I said:
By convention “why is murder wrong?” is a question for normative theory. Your sentence in the post, this one:
is wrong. The SEP does not say otherwise. In any way. “Why is it wrong to kill?” is a normative question. Maybe what is tripping you up is this sentence from the SEP?
I could see how that could be read as “reasons for the truth of deontological morality”. But these are really questions about the epistemology of moral claims—“how do we know x is immoral?” is different from “why is x immoral?” Obviously these questions are usually connected, but they don’t have to be. It is logically possible to think that the Categorical Imperative makes murder wrong but that the way we learn that is by God speaking to us or by studying physics or whatever.
The distinction makes plenty of sense. It just isn’t what you think it is.
Great, I assume this means you think the SEP article is representing the convention. Let me know if that’s not the case, since if it isn’t, we’re wasting our time talking about my interpretation of it.
Anyway, suppose someone were to come along and say ‘Moral truths come primarily in the form of absolute injunctions!’ (or whatever would fix him as a deontologist). We ask him for an example of such an injunction, and he says ‘Do not kill.’ So far, we agree that this whole discussion has taken place within normative ethics.
Now we ask him ‘Why shouldn’t we kill?’ This is a pretty ambiguous question, and we could be asking a clearly normative question to which the answer might be ‘because there’s an injunction to the effect that you shouldn’t’. But this isn’t the kind of question I’m talking about in my (perhaps poorly phrased) initial post. What the confused person I discuss wants from the deontologist is not an answer to the question ‘what is right and wrong?’; he wants answers to questions like ‘what makes a particular injunction true?’, ‘how do you know this injunction is true?’, and so on.
What this confused person often complains about (I know you’ve had some recent experience with this on “Philosophical Landmines”) is that the only explanations they get, explanations which are obviously inadequate, are explanations like ‘Because God said so in the Bible’. In complaining about this, the confused person implies that this is the kind of answer they want, but that it’s a very poor one.
A deontologist who gives this kind of answer is, I think we will agree, endorsing some form of divine command theory. So what kind of a thing is ‘divine command theory’, and what kind of answer is ‘because God said so’? Is it meta-ethical, or normative? Well, the SEP article says this:
Notice that Divine command theory is on the list of things next to ‘expressivist’, ‘constructivist’, and other meta-ethical positions, implying that ‘because God said so’ (the kind of answer the confused person is asking for) is not a claim within normative ethics (which would rather involve claims about what, exactly, God said), but a meta-ethical claim. After all, even if we accept we should do what God says, we have not yet committed to either deontology or consequentialism, much less any specific deontological or consequentialist claims like ‘do not kill’ or ‘minimize deaths’.
So I grant that ‘why is it wrong to kill’ was a poor phrase: this question is ambiguously normative or meta-ethical. If this is all you meant by ‘right in spirit but wrong in letter’, then I agree, and I’ll now try to come up with a way to make my post clearer.
Nevertheless, according to the SEP article anyway, I’ve correctly identified the conventional line between normative ethics and meta ethics, and so I’ve correctly diagnosed a confusion. What do you think?
Okay. I think I see what is happening. The whole issue gets weirdly skewed by divine command theory, which is so simple it is hard to see the distinction and which implies a very particular formula for a normative theory. Let me outline the position:
Metaethics: Divine Command theory. In answer to the question “What is morality?” they answer “the will/decree of God”.
Normative Ethics: In answer to the question “Why is murder immoral?” they provide a proof that God decrees murder to be immoral, say, a justification for the Bible as the word of God and a citation of the Ten Commandments. Non-judeo-christian divine command theorists would say something else. Some normative theories under the umbrella of divine command theory could even be consequentialist, “God told me in a dream to maximize preference satisfaction.” These answers assume divine command theory but they’re still normative theory.
Now in a real life debate with a divine command theorist they may emphasize the “God said so” part instead of the “here is where he said it” part. But that’s just pragmatics: you don’t care about the normative proof until you share the meta-ethic, so it is reasonable for a divine command theorist to skip straight to the major point of contention.
In the case of divine command deontology the “non-answer” issue is pretty much entirely about the meta-ethical assumptions and not the actual normative theory. So I can see why you were emphasizing the fact that deontology is logically independent of any particular meta-ethical framework.
It might be less confusing to just emphasize that “deontology” isn’t a particular normative theory but a class of normative theories picked out by a particular feature (just like consequentialism), and that there is nothing necessarily mysterious or magical about that feature; the air of mystery comes from a particular sort of deontological normative theory which is popular among non-philosophers, a theory which assumes a stupid meta-ethics even though there is no need for deontologists to embrace that meta-ethics.
To summarize: I’m not sure that you’ve correctly identified the conventional line between normative ethics and meta-ethics, but I can see why the context of divine command theory makes the question “why is murder wrong?” seem like a meta-ethical one. When I said you were right in spirit I meant that I agreed that people were strawmanning deontology but disagreed as to the nature of the error. I don’t think it’s that “why is murder wrong?” isn’t a normative question. Rather, it’s that people assume deontology refers to a particular kind of deontology which assumes an unhelpful and uninteresting metaethics, and this leads that brand of deontology to be unable to give interesting answers to “why” questions.
Any of that make sense?
Yes, and I don’t think we have any further disagreement. Thanks for the interesting discussion.
I’m not sure that divine command theory implies “a very particular formula for a normative theory”. In practice, many divine command theorists pay a lot of attention to things like casuistry (i.e. case-based reasoning) and situational ethics. In other words, they do morality “case by case” or “fable by fable”. Surely any such moral theory must contain a lot of non-trivial normative content. It’s not at all the case that all arguing happens on the meta-ethical, “God said it” level.
This is a good point.
The answer to this question actually depends on whether you are doing normative ethics, or talking about morality. In the former case, a sensible answer would be: “because, as a matter of fact, most individuals and societies agree that ‘non-killing’ is a morally relevant ‘value’, where ‘value’ means a conative ambition (i.e. what ‘should’ we do?). As a normative ethicist, I fall back on such widely-shared values”.
When doing morality in a sort of common-sense way, the answer is more complicated. Generally speaking, you’re going to find that such ‘values’ (or, again, conative ambitions of the “should” variety) are a part of the “moral core” of individuals, what they take their “morality” to be about. This moral core is influenced by many factors, including their biology (so, yes, they’re generally going to share most other humans’ values), society, perceived moral authorities, etc. It can also be influenced by ethical debates they take part in: most people can be convinced that they should drop some moral values and take up others.
All of this means that the real world is quite complicated, and does not fully reflect any of the “moral positions” that philosophers like to talk about.
That is doubtlessly true, though I wonder if it’s an entirely fair criterion. While most ethicists would agree that the right view should reflect actual everyday moral judgements, nothing in particular holds them to that. It’s simply possible that no one is presently good, and that the everyday moral judgements people make are terribly corrupt and over-complicated compared to the correct judgements.
Note that “the way academic philosophy happens to organize” debates about ethics and morality should be taken with a huge grain of salt. Most people who engage in moral/ethical judgment in everyday life pay very little attention to moral philosophy in the academic sense.
In fact, as it happens, most of the public debate about ethics and morals takes place outside academic philosophy, and is hard to disentangle from debate involving politics, law and general worldviews or “cosmologies” (in the anthropological sense).
Very true, though I think it’s important to acknowledge two things: a) philosophers like Mill and Kant have had a huge impact on everyday moral thinking in the west, and b) the kinds of moral debates we typically have on this site are not independent of academic philosophy.
A moral non-realist can have moral theories in the “If, then” form. If you value A, B, and C, then you value D.
If you’re a paper clip maximizer, then …
Except since those are simply hypothetical imperatives, the Moral Non-Realist won’t see the need to call these theories ‘moral’ in nature. The Error Theorist agrees that if you want A then you should do B, but he wouldn’t call that a theory of morality.
There are all kinds of preferences, and distinguishing moral preferences from other types of preferences is still useful, even if you don’t believe that those preferences are commands from existence.
The Error Theorist might not call that a theory of morality. My reply to him is that what others call moral preferences differ in practice from other preferences. Treating them all the same is throwing out the conceptual baby with the bathwater.
And others, perhaps you, might not want to call these theories “moral” either, because you seem to want “imperatives”, and my account of morality doesn’t include imperatives from the universe, or anything else.
The problem is that the line between what has felt like a “moral” preference and what has felt like some other kind of preference has been different in different social contexts. There may not even be agreement in a particular culture.
For example, some folks think an individual’s sexual preferences are “moral preferences,” such that a particular preference can be immoral. Other folks think a sexual preference is more like a gastric preference. Some people like broccoli, some don’t. Good and evil don’t enter into that discussion at all.
If the error theory were false, I would expect the line dividing different types of preferences would be more stable over time, even if value drift caused moral preferences to change over time. In other words, the Aztecs thought human sacrifice was good, we now think it is evil. But the question has always been understood as a moral question. I’m asserting that some questions have not always been seen as “moral” questions, and the movement of that line is evidence for the error theory.
The line between “truth” and “belief” is also not stable across cultures.
The line between “true” and “not true” is different in different cultures? I wasn’t aware that airplanes don’t work in China.
I meant in the same sense that you meant the statement about cultures, i.e., if you ask an average member of the culture, you’ll get different answers for what is true depending on the culture.
I was talking about community consensus, not whatever nonsense is being spouted by the man-on-the-street.
As you noted, the belief of the average person is seldom a reliable indicator (or even all that coherent). That’s why we don’t measure a society’s scientific knowledge that way.
Ok, my point still stands.
That’s still a moral theory.
Which was the point I was making.
“A moral non-realist can have moral theories …” So I presented the form of the moral theory a moral non-realist could have.
Sorry, I was in a hurry when I posted the grandparent and was unclear:
Specifically my point was that the form of extreme be-yourself-ism implicit in your statement is still a moral theory, one that would make statements like:
“If you’re a paper clip maximizer, then maximize paperclips.”
“If you’re a Nazi, kill Jews.”
“If you’re a liberal, try to stop the Nazis.”
Those aren’t accurate statements of the kinds of moral theories I was speaking of.
I gave the example:
That’s not an imperative, it’s an identification of the relationship between different values, in this case that A,B,C imply D.
Ok, that’s not a moral theory unless you’re sneaking in the statements I made in the parent as connotations.
To me, a theory that identifies a moral value implied by other moral values would count as a moral theory.
What kind of theory do you want to call it?
I think I agree with Eugine_Nier that it isn’t a moral theory to be able to draw conclusions. One doesn’t need to commit to any ethical or meta-ethical principles to notice that Clippy’s preferences will be met better if Clippy creates some paperclips.
At the level of abstraction we are talking in now, moral theories exist to tell us what preferences to have, and meta-ethical theories tell us what kinds of moral theories are worth considering.
Does one need to commit to a theory to have one?
It sounds to me like you only think a person has a moral theory when the moral theory has them.
For you, under your moral theories. Not for me. I’m happy to have theories that tell me what moral values I do have, and what moral values other people have.
What do you want to call those kinds of theories?
Obviously not—but it isn’t your moral theory that tells you how Clippy will maximize its preferences.
Alice the consequentialist and Bob the deontologist disagree about moral reasoning. But Bob does not need to become a consequentialist to predict what Alice will maximize, and vice versa.
Reasoning? More generally, thinking about (and caring about) the consequences of actions is not limited to consequentialists. A competent deontologist knows that pointing guns at people and pulling the trigger tends to cause murder—that’s why she tends not to do that.
I should be working now, but I don’t want to. So I’m here, relaxing and discussing philosophy. But I am committing a minor wrong in that I am acting on a preference that is inconsistent with my moral obligation to support my family (as I see my obligations). Does that type of inconsistency between preference and right action never happen to you?
I wonder if it would be more useful, instead of talking about consequentialist vs. deontological positions, to talk about consequence-based and responsibility/rights-based inference steps, which can possibly coexist in the same moral system; or possibly consequence-based and responsibility/rights-based descriptions of morally desirable conditions?
I think that’s an excellent suggestion.
Prior art on the subject.
_TL;DR: I see lots of debates flinging around “consequentialism” and “utilitarianism” and “moral realism” and “subjectivism” and various other philosophical terms, but each time I look up one of them or ask for an explanation, it inevitably ends up being something I already believe, even when it comes from both sides of a heated argument. So it turns out “I am a X” for nearly all X I’ve ever seen on LessWrong. Here’s what I think about all of this, in honest lay-it-out-there form. For a charitable reading, assume there is no sarcasm or trolling anywhere in this comment._
Hmm. So...
I believe that there is an objective system of verifiable, moral facts which can be true or false. [3]
These facts depend on certain objective features of the universe. [2]
However, if one is to ask a moral question without including a specific group-referent (though usually, “all humans” or “most humans” is implicit) from which one can extract that objective algorithm that makes things moral or not, then there is no “final word” or “ultimate truth” about which answer is right, and in fact the question seems hopelessly self-contradictory to me. [1]
To my understanding, since something inside humans determines moral judgments and also determines our opinions on morality, they are correlated, but by a separate cause that seems all too often ignored. I believe that eventually we may be able to understand how this separate black box inside humans makes decisions on morality, and then formulate equations to calculate how moral something is for a particular agent. [5]
Given that this “morality” thing only depends on the minds of people, it can also be said to be only about what these people think of it, in a very wide sense of the phrase. However, what opinions people generate and what turns out to be objectively moral are correlated, but from a third cause—one that is still a black box which we cannot describe very accurately (otherwise, you’d be able to show me exactly which neurons fire and in which order and exactly why that makes someone think and say that killing is, ceteris paribus, just simply bad and wrong).
Based on the above, if one were to remove humans altogether then I believe there would be no “right” or “wrong” or “moral” left at all, at least not in the way we mean those words. [1]
Since humans can influence the state of reality, and there’s an algorithm somewhere that determines what we find moral, and humans “prefer” things that are moral (are programmed to act in a way that brings about higher quantities of this “moral” stuff), then if they do things which probably lead to more of it, they prefer that result, and if not, they would have preferred the other result. It follows from this that humans should do things which (probably) lead to higher values of this moral stuff.
I would even go so far as to claim that anything that does not do the above, therefore breaks the rules of morality, and is not maximizing the algorithm of morality—they are breaking the rules and doing something outright wrong as a simple matter of mathematics. If they did the right thing, they would have more moral results. [4]
...So, what “am” I? What labels do I “get”, having hereby cited, to the best of my understanding, the primary points and positions of all the sides of the debates here, with in my mind no contradiction whatsoever in any of the above?
Deontology and consequentialism aren’t what’s confusing me. What’s confusing me is that there is all this confusion about the above points, and why people keep arguing about all of the above while to me they always seem to just be talking past each other and seem to show clear signs of having the exact same model of the world (though sometimes assigning different names to different nodes or even to the model itself), or at least make the same predictions about morality.
Foundations, background and prior beliefs (“justifications”), to avoid more needless confusion:
0 - There is an objective, shared reality that we all live in that determines our experiences, not the other way around. This is simply the most natural, simplest way for the universe to function, and despite many claiming that there’s a dragon in their garage, every single human I’ve ever met has always acted as if the above were true. With no exceptions.
1 - By studying the anthropic principle, physics, evolution, and some long-term history, I arrive at the conclusion that the universe isn’t built for humans, that humans are a random artifact in it, and that if there were never any humans in the universe (or if we all go extinct), the rest of the universe will go on not giving a shit about us (as in, it can’t give a shit, it doesn’t have a mind, or even if it does, this mind just obviously doesn’t do things according to human morality, otherwise we’d live in what humans would consider an ultimate heavenly utopia) and running along on its course of cruel physics and lifeforms suffering horribly before winking out of existence entirely for no reason or justification we might find valid or comforting right now.
2 - The Map is not the Territory, but the map is in the territory. Therefore, any part of the map is also an objective element of the landscape, an objective feature of reality. This includes human minds and human thoughts and human debates about morality.
3 - Since human minds are part of objective reality, they can be analyzed and objective, verifiable propositions can be stated about them.
4 - Numerically, some results will be better than others. However, if we assume that humans have multiple values as part of this “morality” thing and some of them have no relative ratios or bases of comparison, we run into game-theoretic issues of having to choose one of the Pareto optima in a significant number of possible games (a minimal sketch of this situation follows after point 5 below). It is theoretically possible in the real world that some issues will be this ambiguous, but in my experience in the vast majority of cases a more careful evaluation of the same morality algorithm will reveal that some of the possible choices which in the immediate term seem to fulfill different values ambiguously will ultimately lead to strictly dominant outcomes when weighed over their effects on the world and opportunities for more fulfillment of values that are part of what is being currently valued.
In other words, while some possible choices may have multiple “optimal” terminal-value payoffs in a way that makes it ambiguous if calculated naively, the instrumental contribution of each choice to future worldstates will almost always make one of the outcomes strictly better than all the others because of the additional current value of generating worldstates that will give better odds of generating more value in future games.
5 - I reject all forms of dualism or claims that we can never possibly understand what goes on in human minds, on the basis of the same arguments and evidence cited in the Generalized Solution to P-Zombies. I can elaborate slightly more on this on request, but I personally consider the matter long resolved (as in, dissolved entirely such that I see no questions left to ask).
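Here is the sketch referenced in point 4: a toy, made-up set of choices scored along two incommensurable values, filtered down to the Pareto-optimal ones. None of the names or numbers come from the post; it only illustrates why a naive evaluation can leave several “optimal” choices with no tie-breaker.

```python
# Hypothetical toy payoffs: each choice is scored along two values that have
# no agreed exchange rate between them.
choices = {
    "A": (3, 1),   # (value_1, value_2)
    "B": (2, 2),
    "C": (1, 3),
    "D": (1, 1),   # strictly dominated by A, B, and C
}

def dominates(p, q):
    """True if p is at least as good as q on every value and strictly better on at least one."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

# Keep only the choices that no other choice strictly dominates.
pareto = {name: payoff for name, payoff in choices.items()
          if not any(dominates(other, payoff) for other in choices.values())}

print(pareto)  # {'A': (3, 1), 'B': (2, 2), 'C': (1, 3)} -- three optima, no single "best" choice
```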
Edit: Added TLDR and fixed some of the formatting.
If you think of your map as a set of sentences that models the territory, an objective fact can be defined as a sentence in this set. So morality is objective in this regard if what determines your moral judgments are sentences in your map. Now consider the following counterfactual: in this world the algorithms that determine your decisions are very different. They are so different that counterfactual-you thinks torturing and murdering innocent people is the most moral thing one can do.
Now I ask (non-counterfactual) you: is it moral for counter-factual you to torture and murder innocent people? Most people say “no”. This is because our moral judgments aren’t contingent on our beliefs about the algorithms in our head. That is, they are not objective facts. We just run the moral judgment software we have and project that judgment onto the map. I developed this argument further here.
I think we’re fairly close, but have one major difference.
I’d say there are moral facts. These moral facts are objective features of the universe. These facts are about the evaluations that could be made by the moral algorithms in our heads. Where I differ with you is in the number of black boxes. “We” don’t have “a” black box. “Each” of us has our own black box.
Moral, as evaluated by you, is the result of your algorithm given the relevant information and sufficient processing time. I think this is somewhat in line with EY, though I can never tell if he is a universalist or not. Moral is the result of an idealized calculation of a moral algorithm, where the result of the idealization is often different than the actual because of lack of information and processing time.
A case could be made for this view to fall into many of the usual categories. Moral relativism. Ethical Subjectivism. Moral Realism. Moral Anti Realism. About the only thing ruled out is Universalism.
For Deontology vs. Consequentialism, it gets similarly murky.
Do consequentialists really do de novo analysis of the entire state of the universe again and again all day? If I shoot a gun at you, but miss, is it “no harm, no foul”? When a consequentialist actually thinks about it, all of a sudden I expect a lot of rules of behavior to come up. There will be some rule consequentialism. Then “acts” will be seen as part of the consequences too. Very quickly, we’re seeing all sorts of aspects of deontology when a consequentialist works out the details.
The same thing with deontologists. Does the rule absolutely always apply? No? Maybe it depends on context? Why? Does it have something to do with the consequences in the different contexts? I bet it often does. Similarly, the “though the heavens fall, I shall do right” attitude is rarely taken in hypotheticals, and would be more rarely taken in actual fact. You won’t tell a lie to keep everyone in the world from a fiery death? Really? I doubt it.
I’d expect a social animal to have both consequentialist and deontologist moral algorithms, but that there’d be significant feedback between the two. I’d expect the relative weighting of those algorithms to vary from animal to animal, much in the same way Haidt finds the relative strengths of the moral modalities he has identified vary between people.
Most of the argument over consequentialism and deontology probably comes more from how they are used as rationalizations for your preferences in moral modalities than the relative weighting of your consequentialist and deontological algorithms anyway. The meta argument over consequentialism vs. deontology is a way to avoid hard thinking that drives both algorithms to a settled conclusion.
This doesn’t seem to be a point on which we differ at all. In this later comment I’m saying pretty much the same thing.
Indeed, I wouldn’t be surprised if each of us has hundreds of processes that feel like they’re calculating “morality”, and aren’t evaluating according to the same inputs. Some might have outputs that are not quite easy to directly compare, or impossible to.
OK. I see your other comment. I think I was mainly responding to this:
You can’t extract “an” objective algorithm even if you do specify a group of people, unless your algorithm returns the population distribution of their moral evaluations, and not a singular moral evaluation. Any singular statistic would be one of an infinite set of statistics on that distribution.
Thanks for the very clear direct account of your view. I do have one question: it seems that on your view it should be impossible to act according to your preferences, but morally wrongly. This is at least a pretty counterintuitive result, and may explain some of the confusion people have experienced with your view.
As stated, this is correct. I don’t quite think this is what you were going for, though ;)
Basically, yes fully true even in spirit, IFF: morality is the only algorithm involved in human decision-making AND human decision-making is the only thing that determines what the rest of my brain, nervous system, and my body actually end up doing.
Hint: All of the above conditions are, according to my evidence, CLEARLY FALSE.
Which means there are competing elements within human brains that do not seek morality, and these are a component of what people usually refer to when they think of their “preferences”, such as “I would prefer having a nice laptop even though I know it costs one dead kid.”
If we recenter the words, in terms of “it is impossible to decide that it is more moral to act against one’s moral preferences”, then… yeah. I think that logically follows, and to me sounds almost like a tautology.
Once the equations are balanced and x is solved for and isolated, one’s moral preferences are what one decides it is more moral to act in accordance with, which is what one morally prefers as per the algorithm that runs in some part of the brain.
So judging from this, the solution might simply be to taboo and reduce more stuff. Thanks, your question and comment were directly useful to me and clear.
Okay, thanks for clarifying. I still have a similar worry though: it seems to be impossible that anyone should act on their own moral preferences, yet morally wrongly. This still seems quite counterintuitive.
You are correct in that conclusion. I think it is impossible for one to act on their own (true) moral preferences yet morally wrongly.
There are two remaining points, for me. First is that it’s difficult to figure out one’s own exact moral preferences. The second is that it becomes extremely important to never forget to qualify “morally wrongly” with whose evaluation is meant.
Frank can never act on Frank’s true moral preferences and yet act Frank’s-Evaluation-Of morally wrongly.
Bob can never act on Bob’s true moral preferences and yet act Bob’s-Evaluation-Of morally wrongly.
However, since it is not physically required in the laws of the universe that Frank’s “Evaluation of Morally Wrong” function == Bob’s “Evaluation of Morally Wrong” function, this can mean that:
Frank CAN act on Frank’s true moral preferences and yet act Bob’s-Evaluation-Of morally wrongly.
So to attempt to resolve the whole brain-wracking nightmare that ensues, it becomes important to see whether Bob and Frank have common parts in their evaluation of morality. It also becomes important to notice that it’s highly likely that a fraction of Frank’s evaluation of morality depends on the results of Bob’s evaluation of morality, and vice-versa.
Thus, we can get cases where Frank’s moral preferences will depend on the moral preferences of Bob, at least in part, which means if Frank is really acting according to what Frank’s moral preferences really say about Frank not wanting to act completely against Bob’s moral preferences, then Frank is usually also acting partially according to most of Bob’s preferences.
It is counterintuitive, I’ll grant that. I find it much less counterintuitive than Quantum Physics, though, and as the latter exemplifies it’s not uncommon for human brains to not find reality intuitive. I don’t mean this association connotatively; I don’t really have other examples. My point is that human intuition is a poor tool to evaluate advanced notions like these.
This is sensible enough as a theory of morality, but you still haven’t accounted for ethics, or the practice of engaging in interpersonal arguments about moral values. If Bob!morality is so clearly distinct from Frank!morality, why would Bob and Frank even want to engage in ethical reasoning and debate? Is it just a coincidence that we do, or is there some deeper explanation?
A possible explanation: we need to use ethical debate as a way of compromising and defusing potential conflicts. If Bob and Frank couldn’t debate their values, they would probably have to resort to violence and coercion, which most folks would see as morally bad.
Well, I agree with your second paragraph as a possible reason, which on its own I think would be enough to make most actual people do ethics.
And while Bob and Frank have clearly distinct moralities, since both of them were created by highly similar circumstances and processes (i.e. those that produce humans brains), it seems very likely that there’s more than just one or two things on which they would agree.
As for other reasons to do ethics, I think the part of Frank!morality that takes Bob!morality as an input is usually rather important, at least in a context where Frank and Bob are both humans in the same tribe. Which means Frank wants to know Bob!morality, otherwise Frank!morality has incomplete information with which to evaluate things, which is more likely to lead to sub-optimal estimates of Frank’s moral preferences compared to what they would be if Frank had known Bob’s true moral preferences.
Frank wants to maximize the true Frank!morality, which has a component for Bob!morality, and probability says incomplete information on Bob!morality leads to lower expected Frank!morality.
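A purely illustrative sketch of that claim, with made-up weights and scores (nothing beyond the names Frank and Bob comes from the comment): Frank’s evaluation of an outcome includes a weighted term for his model of Bob’s evaluation, so a wrong model of Bob!morality makes Frank misjudge outcomes.

```python
# Hypothetical scoring functions; the weight and scores are arbitrary illustrations.

def bob_morality(outcome):
    return outcome.get("bob_score", 0)

def frank_morality(outcome, bob_model=bob_morality, weight_on_bob=0.3):
    # Frank!morality: mostly Frank's own score, plus a term for (his model of) Bob's evaluation.
    own = outcome.get("frank_score", 0)
    return (1 - weight_on_bob) * own + weight_on_bob * bob_model(outcome)

outcome = {"frank_score": 5, "bob_score": -10}

# With a correct model of Bob, Frank sees the true (lower) value of this outcome;
# with a mistaken model that assumes Bob is indifferent, Frank overestimates it.
print(frank_morality(outcome))                         # 0.5
print(frank_morality(outcome, bob_model=lambda o: 0))  # 3.5
```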
If we add more players, eventually it gets to a point where you can’t keep track of all the X!morality, and so you try to build approximations and aggregations of common patterns of morality and shared values among members of the groups that Frank!morality evaluates over. Frank also wants to find the best possible game-theoretic “compromise”, since others having more of their morality means they are less likely to act against Frank!morality by social commitment, ethical reasoning, game-theoretic reasoning, or any other form of cooperation.
Ethics basically appears to me like a natural Nash equilibrium, and meta-ethics the best route towards Pareto optima. These are rough pattern-matching guesses, though, since what numbers would I be crunching? I don’t have the actual algorithms of actual humans to work with, of course.
Point 2 is terrific, and bears repeating in some other threads.
But those “objective” facts would only be about the intuitions of individual minds.
Same problem. A thinks it is moral to kill B, B thinks it is not moral to be killed by A. Where is the objective moral fact there? Objective moral facts (or at least intersubjective ones) need to resolve conflicts between individuals. You have offered nothing that can do that. Morality cannot just be a case of what an individual should do, because individuals interact.
Then morality is not so objective that it is graven into the very fabric of the universe. The problem remains that what you have presented is too subjective to do anything useful. By all means present a theory of human morality that is indexed to humans, but let it regulate interactions between humans.
That is hard to interpret. Why should opinions be what is “objectively moral”? You might mean there is nothing more to morality than people’s judgements about what is good or bad, but that is not an objective feature of the universe, it is mind projection. That the neural mechanisms involved are objective does not make what is projected by them objective. If objective neural activity makes me dream of unicorns, unicorns are not thereby objective.
And in any case, what is important is co-ordinating the judgements of individuals in the case of conflict.
“We” individually, or “we” collectively? That is a very important point to skate over.
That seems to be saying that it is instrumentally in people’s interests to be moral. But if that were always straightforwardly the case, then there would be no issues of sacrifice and self-restraint involved in morality, which is scarcely credible. If I lay down my life for my country, that might lead to the greater good, but how good is it for me? The issue is much more complex than you have stated.
(part 2 of two-part response, see below or above for the first)
See this later comment but this one especially (the first is mostly for context) to see that I do indeed take that into account.
The key point is that “morality” isn’t straightforwardly “what people want” at all. What people consider moral when they evaluate all the information available to them and what people actually do (even with that information available) are often completely different things.
Note also that context and complicated conditionals become involved in Real Issues™. To throw out a toy example:
Julie might find it moral to kill three humans because she values the author of this post saying “Shenanigans” out loud only a bit less than their lives, and the author has committed to saying it three times out loud for each imaginary person dead in this toy example. However, Jack doesn’t want those humans dead, and has credibly signaled that he will be miserable forever if those three people die. Jack also doesn’t care about me saying “Shenanigans”.
Thus, because Julie cares about Jack’s morality (most humans, I assume, have values in their morality for “what other people of my tribe consider moral or wrong”), she will “make a personal sacrifice and use self-restraint” to not kill the three nameless, fortunate toy humans. The naive run of her morality over the immediate results says “Bah! Things could have been more fun.”, but game-theoretically she gains an advantage in the long term—Jack now cooperates with her, which means she incurs far fewer losses overall and still gains some value from her own people-alive moral counter and from Jack’s people-alive moral counter as well.
I think you are vastly confusing “good”, “greater good”, and “good for me”. These need to be tabooed and reduced. Again, example time:
Tom the toy soldier cares about his life. Tom cares about the lives of his comrades. Tom cares about the continuation of the social system that can be summarized as “his country”.
If Tom dies without any reason or effect, this is clearly bad. However, Tom values the end of his country as 1⁄2 of his life. So far, he’s still not going to die for it. Tom also values each comrade life at 1/10th of his life. Still not going to die for his country. Tom also knows that the end of his country means 95% chance that 200 of his comrades will die, with the other 5% they all live. If the country does not end, there’s a 50% chance that 100 of his comrades will die anyway, with 50% they live.
If Tom lives, there is 95% chance (as far as Tom knows, to his evidence, etc. etc.) that the country will end. If Tom sacrifices himself, the country is saved (with “certainty”, usual disclaimers etc. etc.).
So if Tom lives, Tom’s values go to −1/2 plus .95 chance of .95 chance of −20. If Tom sacrifices himself, the currently-alive Tom values this at −1 plus .5 chance of −10. Values are in negative utility only for simplicity of calculation, but this could be described at length in any other system you want (with a bit more effort though).
So the expected utility comes out at −18.55 if Tom lives, and −6 if Tom sacrifices himself, since Tom is a magical toy human and isn’t biased in any way and always shuts up and calculates and always knows exactly his own morality. So knowing all of this, Tom lays down his life for his country and what he would think of as “the greater good”.
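For concreteness, here is the arithmetic above as a short sketch. The numbers and the simplifications (including counting the end of the country at −1/2 outright rather than weighting it by the 95% chance) follow the comment itself, not any standard decision-theory formulation.

```python
# Utilities are negative multiples of the value Tom places on his own life.
country_ends = -0.5    # the end of his country is worth 1/2 of his life
per_comrade  = -0.1    # each comrade's life is worth 1/10 of his life

# If Tom lives: the -1/2 term for the country ending, plus a .95 * .95 chance
# that 200 of his comrades die.
u_live = country_ends + 0.95 * 0.95 * (200 * per_comrade)

# If Tom sacrifices himself: he loses his own life (-1), the country is saved,
# and there is still a 50% chance that 100 comrades die anyway.
u_sacrifice = -1 + 0.5 * (100 * per_comrade)

print(round(u_live, 2), round(u_sacrifice, 2))  # -18.55 -6.0 -> sacrifice is the lesser loss
```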
I really don’t see how I’ve excluded this or somehow claimed that all of this was magically whisked away by any of what I said.
Overall, I think the only substantive disagreement we had is in your assessment that I didn’t think of / say anything useful towards solving interpersonal moral conflicts (I’m pretty sure I did, but mostly implicitly). I think the issue of what “morality” is for is entirely an empty word problem and should be ignored.
I’ll gladly attempt to reduce or taboo for reasonable requests to do so. If you think there are other issues we disagree on, I’d like them to be said. However, I would much appreciate efforts to avoid logical rudeness, and would also greatly appreciate if in further responses you (or anyone else replying) assumed that I haven’t thought through this only at the single-tier, naive level without giving this much more than five minutes of thought.
Or, to rephrase positively: Please assume you’re speaking to someone who has thought of most of the obvious implications, has thought about this for a very considerable amount of time, has done some careful research, and thinks that this all adds up to normality.
Tom will sacrifice himself if his values lead him to, and not if they don’t. He might desert or turn traitor. You would still call that all moral because it is an output of the neurological module you have labelled “moral”.
I think it isn’t. If someone tries to persuade you that you are wrong about morality, it is useful to consider the “what is morality for” question.
Do you think any of this adds up to any extent of a solution to the philosophical problems of morality/ethics?
Yes!
.
(this space intentionally left blank)
.
.
What specific philosophical problems? Because yes, it does help me clarify my thoughts and figure out better methods of arriving at solutions.
Does it directly provide solutions to some as-yet-unstated philosophical problems? Well, probably not, since the search space of possible philosophical problems related to morality or ethics is pretty, well, huge. The odds that my current writings provide a direct solution to any given random one of them are pretty low.
If the question is whether or not my current belief network contains answers to all philosophical problems pertaining to morality and ethics, then a resounding no. Is it flabbergasted by many of the debates and many of the questions still being asked, and does it consider many of them mysterious and pointless? A resounding yes.
Consequentialism versus deontology, objectivism versus subjectivism, as in the context.
Any would be good. Metaethics is sometimes touted as a solved problem on LW.
Oh. Yep.
As I said originally, both of those “X versus Y” and many others are just confusing and mysterious-sounding to me.
They seem like the difference between Car.Accelerate() and AccelerateObject(Car) in programming. Different implementations, some slightly more efficient for some circumstances than others, and both executing the same effective algorithm—the car object goes faster.
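A throwaway sketch of that analogy (the Car class and its attributes are hypothetical, just a Python rendering of the pseudocode identifiers above): a method call and a free function are two implementations of the same effective algorithm.

```python
class Car:
    def __init__(self, speed=0):
        self.speed = speed

    def accelerate(self, delta=1):      # the Car.Accelerate() style
        self.speed += delta

def accelerate_object(car, delta=1):    # the AccelerateObject(Car) style
    car.speed += delta

a, b = Car(), Car()
a.accelerate()
accelerate_object(b)
assert a.speed == b.speed == 1          # either way, the car object goes faster
```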
Oh. Well, yeah, it does sound kind-of solved.
Judging by the wikipedia description of “meta-ethics” and the examples it gives, I find the meta-ethics sequence on LW gives me more than satisfactory answers to all of those questions.
You previously said something much more definite-sounding:
“I believe that there is an objective system of verifiable, moral facts which can be true or false”
…although it has turned out you meant something like “there are objective facts about de facto moral reasoning”.
The alleged solution seems as elusive as the Snark to me.
You seem to misunderstand most of my beliefs, so I’ll try to address that first before I go any further to avoid confusion.
No. Just no. No no no no no no no no no no no no no. NO! NO!
The objective fact is that there is a brain made mostly of neurons and synapses and blood and other kinds of juicy squishyness inside which a certain bundle of those synapses is set in a certain particularly complex (as far as we know) arrangement, and when something is sent as input to that bundle of synapses of the form “Kill this child?”, the bundle sends queries to other bundles: “Benefits?” “People who die if child lives?” “Hungry?” “Have we had sex recently?” “Is the child real?” etc.
Then, an output is produced, “KILLING CHILD IS WRONG” or “KILLING CHILD IS OKAY HERE”.
Human consciousnesses, the “you” that is you and that wouldn’t randomly decide to start masturbating in public while sleepwalking (you don’t want to be the guy whom that happened to, seriously), doesn’t have access to the whole thing that the bundle of synapses called “morality” inside the brain actually does. It only has output, and sometimes glimpses of some of the queries that the bundle sent to other bundles.
In other words, intuitions.
What I refer to as an “objective fact”, the “objective” morality of that individual, is the entire sum of the process, the entire bundle + reviewing by conscious mind on each individual process + what the conscious mind would want to fix in order to be even more moral by the morals of the same bundle of synapses (i.e. self-reflectivity). The exact “objective morality” of each human is a complicated thing that I’m not even sure I grasp entirely and can describe adequately, but I’m quite certain that it is not limited to intuitions and that those intuitions are not entirely accurate.
The “objective moral fact” (to use your words), in this toy problem, is that IF AND ONLY IF A is correct when A thinks it is moral for A’s morality system to kill B, and B is correct when B thinks it is moral for B’s system to not be killed by A, then and only then it is moral for A to kill B and it is moral for B to not be killed by A. There are no contradictions, the universe is just fucked up and lets shit like this happen.
What? No. First, that’s called ethics, the thing about how individuals should interact. The reason ethics is hard is because each individual has a slightly different morality, but the reason it’s feasible at all is because most humans are fairly similar even in this.
Most humans, when faced with the toy problem of saving ten young lives versus three old ones, will save the ten young. Most humans, when they see a child get horribly mutilated or have their flesh melt off of their bones, will be revolted and feel that this is many kinds of Very Wrong.
Most humans, if they have a small something they value a little bit, and they know that giving it up temporarily would make another human much better off by that human’s morality, while keeping it for themselves would leave that human feeling horribly wronged, will give up that little bit for the benefit of the other human’s morality.
This seems to indicate that most humans have a component, somewhere in this bundle of synapses, that tries to estimate what the other bundles of synapses in other brains are doing, so as to not upset them too much. This is also part of what helps ethics be feasible at all.
I don’t even understand what you’re getting at. I’m not trying to come up with a system of norms that tells everyone what they should do to interact with other humans. How is it too subjective to be useful?
I’ve merely presented my current conclusions, the current highest-probability results of computing together all the evidence available to me. These are guesses and tentative assessments of reality, an attempt at approximating and describing what actually goes on out there in human brains that gives rise to humans talking about morality and not wanting to coat children with burning napalm. (sorry if this strikes political chords, I can’t think of a better example of something public-knowledge that the vast majority of humans who learned about it described as clearly wrong)
As for being “too subjective to do anything useful”… what? If I tell you that two cars have different engines, and so you can’t use the exact same mathematical formula for calculating their velocity and traveled distance as they accelerate, is this useless subjective information? Because what I’m saying is that humans have different engines in terms of morality, and while like the car engines they have major similarities in the logical principles involved and how they operate, there are key differences that must be taken into consideration to produce any useful discussion about the velocities and positions of each car.
Apologies for being unclear. Opinions are not what is objectively moral. I was saying that the bundle of synapses I described above (well, the algorithms implemented by the synapses, anyway) is the main part of what is objectively moral, and that what comes out of that bundle is also what generates the opinions. They are correlated, but not perfectly so, let alone equivalent/equal.
So more often than not, one’s opinion that it is wrong to suddenly start killing and pillaging everyone in the nearest city is a correct assessment about their own morality. On average, most clear-cut moral judgments will be fairly accurate, because they come out of the same algorithms in different manners.
The latter two sentences of this last quote seem to aptly rephrase exactly what I was trying to say. There are objective algorithms and mechanisms in the bundles of nerves, but even though the conscious mind gets a rough idea of what it thinks they might be doing after seeing a “KILLING A CHILD IS WRONG” output a hundred times, it still doesn’t have access to the whole thing, and even if it did, there are things one would want to correct in order to avoid errors due to bias.
I can’t really be more precise or confident about exactly what morality is in a human’s brain, because I haven’t won five Nobels in breakthrough neurobiology, philosophy, peace, ethics and psychology. I think that’s about the minimum award that would go to someone who had entirely solved and located exactly everything that makes humans moral and exactly how it works.
The ambiguity is appropriate, though unintentional. The first response is “we” individually, but to some extent there are many things that all humans find moral, and many more things that most humans find moral. Again the example of napalm-flavored youngsters.
So each of us has a separate algorithm, but if you were to examine them all individually, you could probably (with enough effort and smarts) come up with an algorithm that finds moral only what all humans find moral, or finds moral whatever at least 60% of humans find moral, or some other filtering or approximation.
To give an example, “2x − 6” will return a positive number as long as x > 3 (let’s not count zero). Similarly, “3x − 3” will return a positive number as long as x > 1. If positive numbers represent a “This is moral and good” output, then clearly these are not the same morality. However, “x > 3” guarantees a space of solutions that both moralities find moral and favorable.
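To make the toy functions above runnable, here is a minimal sketch in Python. The function names (morality_a, morality_b, shared_morality) are invented for illustration, and nothing here is a claim about real moral cognition; it just shows the intersection-and-threshold idea from the previous two paragraphs.

```python
# A minimal sketch, assuming each "morality" is modelled as a toy approval
# function over a single number x. Names are invented for illustration.

def morality_a(x):
    """A approves of x whenever 2x - 6 is positive, i.e. x > 3."""
    return 2 * x - 6 > 0

def morality_b(x):
    """B approves of x whenever 3x - 3 is positive, i.e. x > 1."""
    return 3 * x - 3 > 0

def shared_morality(x, moralities, threshold=1.0):
    """Approve x if at least `threshold` (a fraction) of the moralities approve.

    threshold=1.0 is the "only what everyone finds moral" filter;
    threshold=0.6 would be the "at least 60% of humans" filter.
    """
    approvals = [m(x) for m in moralities]
    return sum(approvals) / len(approvals) >= threshold

moralities = [morality_a, morality_b]
for x in (0, 2, 5):
    print(x, morality_a(x), morality_b(x), shared_morality(x, moralities))
# 0: neither approves; 2: only B approves; 5: both approve (x > 3).
```

Setting threshold=0.6 in the sketch would correspond to the “at least 60% of humans” filter mentioned above.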
(two-part comment, see above or below for the rest)
That’s still not the point. The entire bundle still isn’t Objective Morality, because the entire bundle is still inside one person’s head. Objective morality is what all ideal agents would converge on.
The way you have expressed this is contradictory. You said “it is moral”, simpliciter, rather than “it is moral-for-A, but immoral-for-B”, although putting it that way would have made it obvious that you are talking about subjective morality. And no, it isn’t the universe’s fault. The universe allows agents to have contradictory and incompatible impulses, but it is your choice to call those impulses “moral” despite the fact that they don’t resolve conflicts, or take others’ interests into account. I wouldn’t call them that. I think the contradiction means at least one of the agents’ I-think-this-is-moral beliefs is wrong.
I don’t think so
Ethics: “Moral principles that govern a person’s or group’s behavior.” Also: “1. (used with a singular or plural verb) a system of moral principles: the ethics of a culture. 2. the rules of conduct recognized in respect to a particular class of human actions or a particular group, culture, etc.: medical ethics; Christian ethics. 3. moral principles, as of an individual: His ethics forbade betrayal of a confidence. 4. (usually used with a singular verb) that branch of philosophy dealing with values relating to human conduct, with respect to the rightness and wrongness of certain actions and to the goodness and badness of the motives and ends of such actions.”
Then what are you doing? The observation that facts about brains are relevant to descriptive ethics is rather obvious.
If you allow individual drivers to choose which side of the road to drive on, you have a uselessly subjective system of traffic law.
Their own something. I don’t think you are going to convince an error theorist that morality exists by showing them brain scans. And the terms “conscience” and “superego” cover internal regulation of behaviour without prejudice to the philosophical issues.
Has no bearing on the philosophy, again. All you have there is the intersection of a set of tablets.
Okay. That is clearly a word problem, and you are arguing over my definition.
You assumed I was being deliberately sophistic and creating confusion on purpose. After I explicitly requested twice that things be interpreted the other way around where possible. I thought that it was very clear from context that what I meant was that:
IFF It is moral-A that A kills B
&& It is moral-B that B is not killed by A
&& There are no other factors influencing moral-A or moral-B
THEN:
It is moral for A that A kills B and it is likewise moral for B to not be killed by A. Let the fight begin.
Really? You’re going there?
Please stop this. I’m seeing more and more evidence that you’re deliberately ignoring my arguments and what I’m trying to say, and that you’re just equating everything I say with “This is not a perfect system of normative ethics, therefore it is worthless”.
I have a hard time even inferring what you mean by this rather irrelevant-seeming metaphor. I’m not talking about laws and saying “The law should only punish those that act against their intuitions of morality, oh derp!”—I’m not even talking about justice or legal systems or ideal societies at all! Have I somewhere accidentally made the claim that we should just let every single human build their own model of their own system of morality with incomplete information and let chaos ensue?
Yes. And in case that wasn’t painfully obvious yet, this “something” of their own is exactly what I mean to say when I use the word “morality”!
I’m not attempting to convince anyone that “morality” “exists”. To engage further on this point, I would need those two words to be tabooed, because I honestly have no idea what you’re getting at or what you even mean by that sentence or the one after it.
Yup. If I agree to use your words, then yes. There’s an intersection of a set of tablets. These tablets give us some slightly iffy commandments that even the owner of the tablet would want to fix. The counterfactual edited version of the tablet after the owner has made the fixes, checked again to see if they want to fix anything, and are happy with the result, is exactly what I am pointing at here. I’ve used the words “objective morality” and “true moral preferences” and “moral algorithms” before, and all of those were pointing exactly at this. Yes, I claim that there’s nothing else here, move along.
If you want to have something more, some Objective Morality (in the sense you seem to be using that term) from somewhere else, humans are going to have to invent it. And either it’s going to be based on an intersection of edited tablets, or a lot of people are going to be really unhappy.
I can see that it is a word problem, and I would argue that anyone would be hard pressed to guess what you meant by “objective moral facts”.
What fight? You have added the “for A” and “for B” clauses that were missing last time. Are you holding me to blame for taking you at your word?
You claimed a distinction in meaning between “morality” and “ethics” that doesn’t exist. Pointing that out is useful for clarity of communication. It was not intended to prove anything at the object level.
I don’t know how accidental it was, but your “moral for A” and “moral for B” comment does suggest that two people can be in contradiction and yet both be right.
I am totally aware of that. But you don’t get to call anything by any word. I was challenging the appropriateness of making substantive claims based on a naming ceremony.
You said there were objective facts about it!
You haven’t explained that or how or why different individuals would converge on a single objective reality by refining their intuitions. And no, EY doesn’t either.
if they haven’t already.
So values and intuitions are a necessary ingredient. Any number of others could be as well.
If individual moralities have enough of a common component that we can point to principles and values that are widely-shared among living people and societies, that would certainly count as a “fact” about morality, which we could call a “moral fact”. And that fact is certainly “objective” from the POV of any single individual, although it’s not objective at all in the naïve Western sense of “objectivity” or God’s Eye View.
Dictionary definitions are worthless, especially in specialized domains. Does a distinction between “morality” and “ethics” (or even between “descriptive morality” and “normative morality”, if you’re committed to hopelessly confused and biased naming choices by academic philosophers) cut reality at its joints? I maintain that it does.
And it is still not an objective moral fact in the sense of Moral Objectivism, in the sense of a first-order fact that makes some moral propositions mind-independently true. It’s a second-order fact.
I’ve never seen that distinction in the specialised domain in question.
I don’t think that’s a coincidence. Whether there is some kind of factual (e.g. biological) base for morality is an interesting question, but it’s generally a question for psychology and science, not philosophy. People who try to argue for such a factual basis in a naïve way usually end up talking about something very different than what we actually mean by “morality” in the real world. For an unusually clear example, see Ayn Rand’s moral theory, incidentally also called “Objectivism”.
Just got bashed several times, while presenting the fragility-of-values idea in Oxford, for using the term “descriptive morality”. I was almost certain Eliezer used the term, hence I was blaming him for my bashing. But it seems he doesn’t, and the above comment is the sole instance of the term I could find. I’m blaming you, then! Not really, though; it seems I’ve invented this term on my own, and I’m not proud of it. So far, I’ve failed to find a corresponding term either in meta-ethics or in the Sequences. In my head, I was using it to mean what would be the step zero for CEV. It could be seen as the object of study of descriptive ethics (a term that does exist), but it seems descriptive ethics takes a pluralistic or relativistic view, while I needed a term for the morality shared by all humans.
So it’s even worse than I thought? When ethicists do any “descriptive” research, they are studying morality, whether they care to admit it or not. The problem with calling such things “ethics” is not so much that it implies a pluralist/relativist view—if anything, it makes the very opposite mistake: it does not take moralities seriously enough, as they exist in the real world. In common usage, the term “ethics” is only appropriate for very broadly-shared values (of course, whether such values exist after all is an empirical question), or else for the kind of consensus-based interplay of values or dispute resolution that we all do when we engage in ethical (or even moral!) reasoning in the real world.
Sooo, not objective then. Definition debates are stupid, but there is no reason at all to be this loose with language. Seriously, this reads like a deconstructionist critique of a novel from an undergraduate majoring in English. Complete with scare quotes around words that are actually terms of art.
Well, yes. I’m using scare quotes around the terms “objective” and “fact”, precisely to point out that I am using them in a more general way than the term of art is usually defined. Nonetheless, I think this is useful, since it may help dissolve some philosophical questions and perhaps show them to be ill-posed or misleading.
Needless to say, I do not think this is “being loose with language”. And yes, sometimes I adopt a distinctive writing style in order to make a point as clearly as possible.
If I’ve understood your position correctly, it’s extremely similar to what I would call the “high-level LW metaethical consensus.” Luke’s sequence on Pluralistic Moral Reductionism, Eliezer’s more recent posts about metaethics and a few posts by Jack all illustrate comparable theories to yours. If others have written extensively about metaethics on LW, I may have missed them.
These seem different from each other to me.
How so?
I don’t see (explicit) pluralism in EY. Jack’s approach is so deflationary it could be an error theory.
Either D-ology or C-ism can be taken meta-ethically or at the object level (i.e. following rules blindly or calculating consequences without knowing why).
Surely most are. C-ism is moral realism justified empirically; D-ology is moral realism justified logically. Of the two uses, the former, the meta-ethical, is the more usual.
I think if someone said this, what they probably mean (i.e., would say once you cleared up their confusion about terminology and convention) is something like “deontology does not seem compatible with any meta-ethical theories that I find plausible, while consequentialism does, and that is one reason why I’m more confident in consequentialism than in deontology.” Is this statement sufficiently unconfused?
Yes, that sounds perfectly clear and unproblematic to me, as well as a good way to get at issues which may help decide the consequentialism vs deontology debate.
The best distinction I’ve seen between the two consists in whether you honour or promote your values.
Say I value not-murdering.
If I’m a consequentialist, I’ll act on this by trying to maximise the amount of non-murdering (or minimising the amount of murdering). This might include murdering someone who I knew was a particularly prolific murderer.
If I’m a deontologist, I’ll act on this value by honouring it: I’ll refrain from murdering anyone, even if this might increase the total amount of murdering.
Unfortunately I can’t remember offhand who came up with this analysis.
This sounds like they are, in fact, valuing different things altogether. The consequentialist negvalues the amount of murdering there is, while the deontologist negvalues doing the murdering.
If the deontologist and consequentialist both value not-murdering-people, then the consequentialist takes the action which leads to them not having murdered someone (so they don’t murder, even if it means more total murdering), and the deontologist is as quoted.
If they both negvalue the total amount of murders, the deontologist will honour not-doing-things-which-lead-to-more-total-murder, which by logical necessity rules out refraining from murdering this one time, which means they also murder for the sake of less murdering.
It seems the distinction is, again, merely one of degree and of probability estimates, plus a difference in where in conceptspace people from the two “camps” tend to locate their values. To rephrase: the only real difference between consequentialists and deontologists seems to be the language they use and the general empirical clusters of things they value most, including different probability estimates about the likelihood of certain outcomes.
I think it isn’t precise to say that they value different things, since the deontologist doesn’t decide in terms of values. Speaking of values is practical from the point of view of a consequentialist, who compares different possible states (or histories) of the world; values are then functions defined over the set of world states which the decider tries to maximise. A pure ideal deontologist doesn’t do that; his moral decisions are local (i.e. they take into account only the deontologist’s own action and perhaps its immediate context) and binary (i.e. the considered action is either approved or not, it isn’t compared to other possible actions). If more actions are approved the deontologist may use whatever algorithm to choose between them, but this choice is outside the domain of deontologist ethics.
Deontological rules can’t force one to act as if one valued some total amount of murders (low or high), as the total amount of murders isn’t one’s own action. Formulating the preference as a “deontological” rule of “you shouldn’t do things that would lead you to believe that the total amount of murders would increase” is sneaking consequentialism into deontology.
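As a way of visualising this structural contrast, here is a minimal sketch in Python. The action names, murder counts, and the helpers consequentialist_choice / deontologist_permissible are all invented for illustration, not anyone’s actual theory: the consequentialist compares whole outcomes and picks the best one, while the pure deontologist only checks, locally and in a binary way, whether the agent’s own action is approved.

```python
# A minimal sketch of the structural contrast described above; names and
# numbers are invented for illustration.

# Each candidate action, paired with the total number of murders the world
# is expected to contain if it is taken.
actions = {
    "murder_the_prolific_murderer": 3,   # assumed: prevents many future murders
    "do_nothing": 10,                    # assumed: the murderer keeps going
}

def consequentialist_choice(actions):
    """Compare whole outcomes and pick the action with the fewest total murders."""
    return min(actions, key=lambda a: actions[a])

def deontologist_permissible(action):
    """Check only the agent's own action against a rule; never compare outcomes."""
    return not action.startswith("murder")

print(consequentialist_choice(actions))                      # murder_the_prolific_murderer
print([a for a in actions if deontologist_permissible(a)])   # ['do_nothing']
```

Note that the deontological check deliberately looks only at the description of the agent’s own action and never at the outcome numbers; the moment it starts consulting those totals, it has, as the comment above puts it, smuggled consequentialism back in.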
This is not at all clear to me. The Kantian Categorical Imperative is usually seen as a deontological rule, even though it’s really a formulation of ‘reflective’ concerns (viz., ‘you should not act as you would not have everyone act’, akin to the Silver and Golden Rule) that could be seen as meta-ethical in their own right.
Good point. This also explains why we are so willing to delegate “killing” to external entities, such as job occupations (when the “killing” involves chickens and cattle) and authorities (when we target war enemies, terrorists and the like. Of course this comes with very strict safeguards and due processes.) More recently, we have also started delegating our “killing” to machines such as drones; admittedly, this ignores the truism that drones don’t kill people, people kill people.
Maybe if we were less deontological and more consequentialist in our outlook, there would be less of this kind of delegation.
It depends: a deontological outlook with a maxim that you are responsible for what is done in your name would be even more effective.
To make sure I understood this post correctly:
This would mean the correct common argument would instead be “The type of moral theory that leads to deontology provides no (or no interesting) explanation for the specific injunctions that are in the type of deontology followed.”
Is this correct?
Also, is there a name for the philosophy being criticized in the above argument?
Right, a non-confused attack on the deontologist in the spirit of the confused attack would say something like “your meta-ethical theory does not sufficiently explain the injunctions included in your normative, deontological theory.” But as you imply, this is a criticism of a meta-ethical theory, or better yet an ethicist’s whole view. This is not an attack on deontology as such.
And I don’t think there’s any name for those who make the mistake I point out. It’s not even really a mistake, just a confusion about how a certain academic discussion is organized, which leads, in this case, to a lot of strawmanning.
Sorry, looks like I should have been clearer on the last point. I wasn’t asking for the name of a fallacy, I was asking if there is a name for the type of meta-ethics that leads to deontology.
As to the name of the fallacy, I’m not sure. I suppose it’s something like a misplaced expectation? The mistake is thinking that a certain theoretical moving part should do more work than it is rightly expected to do, while refusing to examine those moving parts which are rightly expected to do that work. EDIT: An example of a similar mistake might be thinking that a decision theory should tell you what to value and why, or that evolution should give an account of abiogenesis.
The SEP article’s last section, on deontology and metaethics, is very helpful here: