Consolidated Nature of Morality Thread
My intended next OB post will, in passing, distinguish between moral judgments and factual beliefs. Several times before, this has sparked a debate about the nature of morality. (E.g., Believing in Todd.) Such debates tend to repeat themselves, reinventing the wheel each time and starting over from scratch rather than building on previous arguments. To avoid this, I suggest consolidating the debate. Whenever someone feels tempted to start a debate about the nature of morality in the comments thread of another post, the comment should be made to this post instead, with an appropriate link to the article commented upon. Otherwise the topic tends to take over discussions like kudzu. (This isn’t the first blog/list where I’ve seen it happen.)
I’ll start the ball rolling with ten points to ponder about the nature of morality...
1. It certainly looks like there is an important distinction between a statement like “The total loss of human life caused by World War II was roughly 72 million people” and “We ought to avoid a repeat of World War II.” Anyone who argues that these statements are of the same fundamental kind must explain away the apparent structural differences between them. What are the exact structural differences?
2. We experience some of our morals and preferences as being voluntary choices, others as involuntary perceptions. I choose to play on the side of Rationality, but I don’t think I could choose to believe that death is good any more than I could choose to believe the sky is green. What psychological factors account for these differences in my perceptions of my own preferences?
3. At a relatively young age, children begin to believe that while the teacher can make it all right to stand on your chair by giving permission, the teacher cannot make it all right to steal from someone else’s backpack. (I can’t recall the exact citation on this.) Do young children in a religious environment believe that God can make it all right to steal from someone’s backpack?
4. Both individual human beings and civilizations appear to change at least some of their moral beliefs over the course of time. Some of these changes are experienced as “decisions”, others are experienced as “discoveries”. Is there a systematic direction to at least some of these changes? How does this systematic direction arise causally?
5. To paraphrase Alfred Tarski, the statement “My car is painted green” is true if and only if my car is painted green. Similarly, someone might try to get away with asserting that the statement “Human deaths are bad” is true if and only if human deaths are bad. Is this valid?
6. Suppose I involuntarily administered to you a potion which would cause you to believe that human deaths were good. Afterward, would you believe truly that human deaths were good, or would you believe falsely that human deaths were good?
7. Although the statement “My car is painted green” is presently false, I can make it true at a future time by painting my car green. However, I can think of no analogous action I could take which would make it right to kill people. Does this make the moral statement stronger, weaker, or is there no sense in making the comparison?
8. There does not appear to be any “place” in the environment where the referents of moral statements are stored, analogous to the place where my car is stored. Does this necessarily indicate that moral statements are empty of content, or could they correspond to something else? Is the statement 2 + 2 = 4 true? Could it be made untrue? Is it falsifiable? Where is its content?
9. The phrase “is/ought gap” refers to the notion that no ought statement can be logically derived from any number of is statements, without at least one ought statement in the mix. For example, suppose I have a remote control with two buttons, and the red button kills an innocent prisoner, and the green button sets them free. I cannot derive the ought-statement “I ought not to press the red button” without both the is-statement “If I press the red button, an innocent will die” and the ought-statement “I ought not to kill innocents.” Should we distinguish mixed ought-statements like “I ought not to press the red button” from pure ought-statements like “I ought not to kill innocents”? If so, is there really any such thing as a “pure” ought-statement, or do they all have is-statements mixed into them somewhere?
10. The statement “This painting is beautiful” could be rendered untrue by flinging a bucket of mud on the painting. Similarly, in the remote-control example above, the statement “It is wrong to press the red button” can be rendered untrue by rewiring the remote. Are there pure aesthetic judgments? Are there pure preferences?
People who want to comment on moral issues should do so at some post on morality, but I don’t see why that must be this particular post. We have a whole category of posts on Morality, after all, and I plan to make more posts in this category.
Am I missing something, or is #6 too easy to belong here?
It depends on what it means to “believe” something, and what “is good” means.
Case 1) Goods are good because and to the extent to which they are valued
1a: If the belief-change were permanent, then it would be equivalent to a desire or drive, and the death of humans would be a finite subjective good, commensurable with other goods. Therefore, the belief would be true, but so would many other beliefs weighing contrarily.
1b: Otherwise—i.e. if it were the equivalent of normal habituation and therefore alterable—then it would be improbable to the extent that it disagreed with other judgments and past experience. Insofar as other people might want to translate subjective probabilities into binary true/false categories, the belief would be false.
Case 2) Good has absolute meaning independent of situations and/or the constitution of the rational being in question.
In this case, there is no possible answer—or, at best, the answer would only extend to marking out what a non-contradictory answer might be, a la Kant.
I suppose I should specify that I really mean to say, not that #6 can easily be answered in a meaningful way, but rather that it has no determinate meaning at all without much more qualification.
Or maybe you meant to do that as kind of a flypaper strategy for people like me? (I’ve suspected, for instance, that the entire Platonic corpus is a trap for people who live in their heads too much, to keep them out of real trouble.)
Robin, this is meant as a drop slot for comments about the nature of morality on posts that are not primarily about the nature of morality.
On the difference between moral judgements and factual beliefs, I find it helpful to think like this:
To give some plausibility to ‘idealist’ philosophies like Plato’s, we can point to certain things which, while they certainly exist, would not exist if there were not minds, like humor.
In the same way, moral judgements certainly exist, but they would not exist if there were not minds. Moral judgements do not correspond to things in the outside world.
Facts, on the other hand, correspond to things outside minds, and factual beliefs are things inside minds that correspond to things outside minds.
This relates to a few of your points as follows:
Your point 1: There is a difference between your factual belief and your moral judgement in that the first corresponds to things in the outside world and the second corresponds to things in the mind.
Your point 5: You can truly assert that the car is green by referring to the outside world, and you can truly assert that human deaths are bad by referring to your own mind. Also, you can not truly assert that the car is green by referring only to your own mind, nor can you truly assert that human deaths are bad by referring only to the outside world.
Your point 8: The place in the environment where moral judgements are stored is in your mind.
The cognitive bias that confuses us about the difference between moral judgements and factual beliefs is a version of the ‘notational bias,’ namely the ‘reification error,’ which causes us to think that because moral judgements are nouns, stated in sentences like factual statements, they have an existence as objects.
It seems to me that most moral questions can be reframed as questions about the net benefits and costs of our actions. Collective morality looks at the net impact of an action on the social group. Individual morality looks at the net impact on oneself as well as the group.
In your question 1, consider the statement, “Net human welfare will be improved if we avoid a repeat of World War II.” This is a factual statement which arguably carries the message of the “ought” version.
For question 2, death is the subject of a very deep and strong moral (or ethical) rule. Consider some rules that are not so strong, like the rule, “Giving to poor beggars on the street is good.” You can probably imagine reversing your belief on this moral issue. Doesn’t it largely come down to the net benefit of your actions?
Question 3, standing on a chair makes no one any worse, but stealing from a backpack causes harm.
Question 4, I would predict that achieving a greater understanding of the impact of one’s actions, positive or negative, might cause them to be viewed in a new moral light. In terms of discoveries vs decisions, perhaps discoveries happen when a new understanding makes it clear that the net impact of some action is good or bad; and decisions would occur where the net effects are less clear, and we have to choose whether to err on the side of caution.
For questions 5-7, again I think the example of human deaths is not the best choice for working on these issues. It is so strong and absolute. Plus, in the framework I am proposing of net benefit, death is something of a special case since dead people no longer experience anything. In some ways one might as well ask about the impact of our actions on unborn or hypothetical individuals. I think if you change the examples to less extreme questions like the one I gave, giving to the poor, these questions are easier to make progress on.
Well, this is getting a bit long...
Eli; what did you think as a young child in a religious environment?
It seems clear that there is a systematic direction in most or all cultures towards application and generalization of moral/ethical vocalizations as wealth increases. There is a less clear trend towards broadening circles of moral consideration, but this may be an instance of the first trend.
3. Eli; what did you think as a young child in a religious environment?
I haven’t the vaguest clue what I thought at that age. My episodic memories of childhood are very weak.
4. It seems clear that there is a systematic direction in most or all cultures towards application and generalization of moral/ethical vocalizations as wealth increases. There is a less clear trend towards broadening circles of moral consideration, but this may be an instance of the first trend.
While that certainly makes for an interesting moral direction, it must have been carried out, at least at the beginning, by people who didn’t start out knowing that this was a good direction. So then what is the causal account of how this directionality occurred? (I’m assuming that everyone’s an adult here and we can rule out mystical rot like “it was built into the fabric of the universe” or similar pleasant absurdities.)
Eliezer, was your previous thread “Your Rationality is My Business” about moral judgments or factual beliefs? It seems you want to discuss the nature of morality WITHOUT discussing more “elementary” assumptions about our relationships to “reality” or “the world at large” or “the universe”, etc… Even the wording above is controversial and a matter of debate. Sweeping these questions “under the rug” is surely not the way to clarify the discussion about “morality” and reach any kind of consensus. Most especially if you pretend to have a special right to oversee other people’s thinking. Cheap arguments cannot be used to justify “thought police”.
Of course few to no people will read this but...
1. Yes, they are different.
2-4. Empirical questions.
5. No.
6. N/a.
7. No sense.
8. 2+2=4 is true given the commonly accepted definitions of the terms involved. Given an assumed systematization of morality, moral statements could be “true” relative to that systematization in the same sense that 2+2=4 is true relative to commonly accepted arithmetic. However, I don’t consider this a particularly useful way of thinking about morality.
9. Any ought-statement can be converted (in principle) into a “pure” ought-statement by rephrasing it as an implication of the original statement from a sufficiently detailed set of factual assumptions.
10. Same as 9.
It is striking to me that people who want to think more carefully about moral issues seem to feel little inclination to read the academic literature on this subject. There are in fact specialists who consider these issues; why reinvent the wheel?
It is possible that there are moral rules that apply universally; perhaps we just haven’t discovered them yet. After all, who would have predicted that the universe had a natural speed limit? So 8 could be false. The rest of the statements assume the viewpoint that there is such a thing as universal morality, asking tricky questions about how to apply or calculate morality. I don’t have answers, as I predict morality is mostly a psychological device imposed by genetics to promote group genetic survival, mixed with some childhood conditioning. We will find out the true nature of morality when we invent AI: if AIs that have no initial moral constraints develop a moral sense that is the same as ours, we can suspect that it is universal. I think the reality is that AIs will be entirely goal-oriented, no matter how smart they are, and lying, cheating, stealing, killing, etc. are all OK if that serves their purpose. Of course they may want us to think they are moral (for their survival purposes), so we had better be careful.
In the meantime, it makes me happier to be a conventionally moral person because of my conditioning or genetics or whatever, so that’s what I do.
I should clarify the above statement—I mean it is possible that moral rules can be derived logically, like the speed of light, from the structure of the universe.
“It is striking to me that people who want to think more carefully about moral issues seem to feel little inclination to read the academic literature on this subject. There are in fact specialists who consider these issues; why reinvent the wheel?”
Morality seems to be a bit like evolution: everyone thinks they understand it. (Language is another thing that very smart and well-informed people think they can pontificate about despite not knowing anything beyond their own anecdotes. People claim that English is the language with the most words (cite your source!), that English has no rules and so is hard to learn, that speaking different languages makes a big difference to how one thinks, etc.)
One almost never hears people say something like “I don’t want to talk about meta-ethical issues until I have become acquainted with the basic literature on the subject”. Of course, people can have a purchase on some moral questions without knowing the philosophical literature, but some purchase on specific questions of normative ethics is very different from an understanding of the difficulty and intricacy of questions in meta-ethics.
How to get acquainted with meta-ethics, i.e. the area of philosophy that deals with explaining the nature of the ethical realm, e.g. whether ethical statements are true relative to a culture, or else empirical generalizations, or the result of a priori intuitions? Try the Stanford and Routledge Encyclopedias of Philosophy (don’t trust Wikipedia for philosophy) or start with Peter Singer’s Oxford Readings in Ethics. From there, move to something more sophisticated of your choosing (e.g. Harman, Tom Nagel, Hilary Putnam, Allan Gibbard). You might think that you already understand ethics better than these philosophers (some people on this blog seem to think so). If so, then reading this stuff will force you to confront the objections of good philosophers, and so help to hone and refine your position. Also, this stuff is not hard to read—use the encyclopedias above to look up any technical terms you don’t know and you’ll be fine.
I am painfully aware that as Robin points out many great minds have debated these issues since at least the time of the ancient Greeks. Nevertheless this is a blog, not an academic journal, and we won’t have much of a conversation if we all remain silent except when we can add to the insights of the greats.
I want to point out that even strong moral rules like the one against killing have exceptions. Most people believe that killing is right in some circumstances. Many people support the death penalty for murderers, for example. Another case is the doctrine of “just war”, where killing enemy soldiers may be seen as part of a greater good. At somewhat the other side of the political spectrum, many would support killing someone who was on life support and had previously expressed a desire not to be kept alive in that state.
Hal, like any consequentialist I am sympathetic to the costs vs benefits way of looking at things. However, there is no commonly accepted scale for measuring these things. There are some who claim to be utilitarians, but there really is no such thing as “utility”, no unit called a “util”, no way to measure it and it is still contested whether a purely hedonic approach is appropriate.
Robin and Bob have suggested that I consult the specialists in the field of ethics. I will repeat my question from the previous thread. How do I know whether the specialists in ethics have anything more to say than those in theology or astrology? There is no falsifiability in ethics and no evidence I know of that studying ethics makes one an authority on anything.
TGGP, What do you mean that you are a consequentialist, if you are so sure ethics is meaningless?
To say that the concepts of true and false do not apply to moral statements is not the same thing as saying that ethics is meaningless. For one thing, one can be committed personally to a particular ethical view without necessarily believing that there is any objective criterion by which it is superior to others. Also, ethics serve a real world purpose in co-ordinating the behaviour of agents with different goals; one can judge the efficacy of moral systems in fulfilling this purpose without necessarily either approving of it or making the mistake of confusing utility with truth.
Putting these together (actually the first is sufficient, but I threw the second one in anyway), one can both be in favor of some set of real world consequences, and judge moral systems on how well they promote these consequences (ie be a consequentialist) without making the mistake of attributing objective truth (or whatever) to the moral systems you therefore favor. There is thus no contradiction in being a consequentialist and denying the existence of any objective morality.
You took the words right out of my mouth, simon.
My causal account of moral directionality is that cognitive resources are scarce and that people try to conserve them. One class of cognitive resource people try to conserve is complexity of model, driving a tendency towards parsimonious explanations. In other words, ethical consistency is a sub-set of consistency in general, which is a luxury good. As non-cognitive wealth increases relative to cognitive wealth, people purchase more parsimony, and noise in early highly random but constrained ethical systems gets locked onto strong attractor dynamics. Islamists, whose meme complexes come from desert nomads with no parsimonious “do unto others” statement explicitly claiming to be the core of ethics, become more misogynistic and violent. Almost everyone else, all of whom hold meme-complexes originating in urban life in the axial age and possessing such a statement, expands their circle of empathy. We don’t know what happens to rich hunter-gatherer cultures, as we haven’t seen any.
“Robin and Bob have suggested that I consult the specialists in the field of ethics. I will repeat my question from the previous thread. How do I know whether the specialists in ethics have anything more to say than those in theology or astrology? There is no falsifiability in ethics and no evidence I know of that studying ethics makes one an authority on anything.”
There is no falsifiability in the philosophy of science, and yet you seem to fully embrace a very specific and contestable doctrine within the philosophy of science and epistemology, namely, Popperian falsificationism. What experimental evidence do you think would falsify falsificationism? Could a physicist falsify this theory by playing around with cloud chambers or particle accelerators?
And if you’re willing to believe in non-falsifiable doctrines in the philosophy of science, then why not believe in non-falsifiable doctrines in philosophical ethics?
Bear in mind that the fact that something can’t be falsified by experimental evidence does not preclude its being argued against. Here is an argument against falsificationism (which hardly any philosophers of science subscribe to these days anyway—look up Bayesian epistemology if you want an update):
Falsificationism says that theories can never be confirmed. Remember that Popper thought that Hume’s problem of induction could not be solved, and so there could never be positive evidence for believing a particular theory. Confirming a generalization (e.g. metals expand when heated) would require evidence of infinitely many cases, which is impossible. But falsifying theories only requires one bit of evidence: a counterexample.
On this view, scientists just falsify theories, and stick with the ones that avoid falsification. My question is how one can (on this view) even make sense of the act of falsification of a particular theory.
Suppose I want to try falsifying the theory that drug X is safe. Then I need to demonstrate a person being harmed by drug X. But I can only demonstrate that fact by confirming a more particular theory, that some person has actually been harmed by the drug. For example, suppose I wanted to show that John had been harmed by the drug. Then I would have to confirm that John had actually been harmed by the drug. But there is no such thing as confirmation on Popper’s theory. So I can’t provide the counterexamples that are meant to generate falsifications. (In short: The very idea of falsification presupposes a notion of confirmation. So it is inconsistent to suppose that you can have one without the other).
A different example: Suppose I’m a biologist, and I make a prediction on the basis of a theory T that if I manipulate genes x,y,z in a rabbit zygote, then the resulting rabbit will have fluorescent green fur. I do the manipulation and the rabbit ends up with fluorescent green fur. Surely this is strong evidence for theory T. This would be seen as a scientific breakthrough. Yet for the Popperian, the only progress I’ve made here is falsifying the negation of theory T. There is no positive progress, no reason to believe that T is more likely than other theories that I have yet to falsify. I have no more reason to believe theory T than the competing theory that God made the rabbit green.
I’m sympathetic to Robin on this one. For people who are interested in thinking seriously about these questions, I think a good first thing to do would be to run a search for metaethics on the Stanford Encyclopedia of Philosophy (http://plato.stanford.edu/). If people like that, then it might be good to buy a book that would serve as an introduction to metaethics, maybe an anthology, or a textbook. I’m not familiar with much of the literature, but I can say that Michael Smith’s “The Moral Problem” serves as a pretty good introduction to a wide number of metaethical debates, though it’s not written as an introductory book. I’m sure the bibliographies on the Stanford Encyclopedia of Philosophy articles would also be helpful.
Yes, Bob, Bayes does trump Popper. Eliezer has explained that pretty well already. However, I don’t see how that saves ethics. There is no disconfirming evidence, and as a result no confirming evidence. There is no utility in knowing ethics, as far as I know, as it will not enable me to make better predictions or do neat things like sending a rocket to the moon. So I ask you, what makes ethics different from theology or astrology so that I should care what experts in it say?
Oh sorry, missed the comments after Robin’s discussing the wisdom of consulting metaethicists about these questions. Suffice to say, Bob’s right that there are some very compelling arguments against Popperian Falsificationism. They are compelling enough, in my opinion, that it’s hard to think that its having fallen out of favor in the philosophical profession represents anything but progress.
Also, here’s a relevant difference between consulting philosophers about metaethical issues and consulting astrologers and theologians about the issues they discuss. With astrologers and theologians, you (I suppose “you” refers to TGGP here, but I’m also describing my opinion on the subject) think there are fundamental methodological problems with the way that they come up with answers to questions—they rely on false premises, and use unreliable forms of reasoning. So, you shouldn’t expect hard thought (which theologians, if not astrologers, have certainly engaged in) in those disciplines to lead to true beliefs. Since you don’t think that your reasoning about theological and astrological matters suffers from these foundational problems, you should trust your own thinking on these matters over that of theologians and astrologers.
However, unless you have similar foundational worries about the methods used by philosophers working in metaethics, you shouldn’t be similarly inclined to trust your conclusions over theirs. Unless you think there’s something systematically wrong with the way they approach things (which it’s hard to see why you’d think if you were unfamiliar with the literature), then you don’t have any reason to think the effort they’ve expended in thinking about the subject is less likely to lead to the truth than the effort you’ve expended (after all, it’s not your job; at the very least, you’ve spent less time thinking about the issues than they have).
I think these observations suggest a general lesson. If you’re going to trust your own opinion over that of those who’ve spent more time thinking about an issue than you have, you should be able to identify some systematic unreliability in the methods that they use to think about the issue.
It might be worth considering whether metaethicists, theologians, and astrologers from radically different backgrounds tended to converge upon some core set of conclusions substantially different from those reached by non-metaethicists. Metaethical, astrological or theological equivalents of Darwin and Wallace or of Newton and Leibniz would be significant confirmation of their disciplinary soundness.
Hey Michael,
I’d be a bit surprised if you could find positive, substantive conclusions that metaethicists tend to converge on. My impression is that there’s a great deal of disagreement in the field.
However, I suspect you could find convergence on negative issues—that is, there are certain views, or at least certain combinations of views, that they might all agree should be rejected. Since I don’t know metaethics well enough, I won’t try to offer an example, but I do know that this happens in other areas of philosophy. To take an example that’s already been mentioned in this thread, I think the fact that most people who give serious thought to Popperian Falsificationism converge on the conclusion that it’s wrong (even while many people who haven’t thought seriously about it find it plausible) is some evidence that they’re getting things right.
I wouldn’t be surprised if there are substantive theories in metaethics that might seem plausible to people who haven’t given them serious thought, but which philosophers have come to reject. If that’s the case, then absent foundational worries about their methods, I think we should tend to think that their convergence is evidence that they’re right to reject those theories. If we’re interested in the questions that Eliezer posted, we should look at what philosophers have had to say about them—even if we’re not likely to get the right answer this way, we may be able to eliminate some wrong answers.
TGGP, that’s not the point at all. The point is that if you buy any doctrine in philosophy of science, you’re believing something nonfalsifiable on the same level as meta-ethics.
Michael, convergence in opinions could just mean these opinions are based on common human biases.
Why do I not put much stock in theology and astrology? Because they have never produced anything useful. If astrologers were regularly winning the lottery based on the numbers they knew to be lucky, I wouldn’t really care how idiotic their methods seem, because ignoring their ideas would result in worse outcomes for me. Physicists are able to make correct predictions and invent neat stuff, so even though quantum mechanics and relativity don’t make complete sense to me I believe them and conclude that they do not have “fundamental methodological problems”. What will happen if I ignore ethicists? I might do something “bad” (although there is no evidence that people who study ethics behave in accordance with what ethicists preach). There will be no way to detect the effect of my “badness”, so it will be indistinguishable from being judged by a “God” who watches from on high but does not intervene. At least the theologians promise some effect in the afterlife, and if it turns out that I was mistaken in not believing in God I’ll have all eternity to regret it. What is the downside to ignoring ethicists? None whatsoever.
Regarding Popperian falsificationism, here is what Eliezer had to say about it: “Falsification is much stronger than confirmation. This is a consequence of the earlier point that very strong evidence is not the product of a very high probability that A leads to X, but the product of a very low probability that not-A could have led to X. This is the precise Bayesian rule that underlies the heuristic value of Popper’s falsificationism.” Even if I ignored Bayesianism and stuck with Popper (which I’m not going to do), at least I would be following a heuristic that would help me some of the time to avoid believing in the kind of nonsense Popper skewered and have more faith in the fields he contrasted with them. So that approach would be sub-optimal, but still have some value. In contrast, what value would I gain from believing in ethics?
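To make the quoted Bayesian rule concrete, here is a minimal sketch (my own illustration; the probabilities are arbitrary). In odds form, the strength of evidence X for hypothesis A is the likelihood ratio P(X|A)/P(X|not-A), so an observation that is nearly impossible unless A holds moves belief far more than an observation that is merely probable given A:

```python
def posterior_odds(prior_odds, p_x_given_a, p_x_given_not_a):
    # Bayes in odds form: posterior odds = prior odds * likelihood ratio.
    return prior_odds * (p_x_given_a / p_x_given_not_a)

prior = 1.0  # even odds on hypothesis A

# Mere confirmation: X is likely under A, but also fairly likely without it.
print(posterior_odds(prior, 0.9, 0.5))    # 1.8 -- a weak update

# Near-falsification of not-A: X is almost impossible unless A holds.
print(posterior_odds(prior, 0.9, 0.001))  # 900.0 -- a very strong update
```

The second observation is not much more probable given A than the first; what makes it strong evidence is that not-A almost never produces it, which is the Bayesian content of Popper's emphasis on falsification.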
Daniel, the foundational problem with meta-ethics (as done by philosophers) is that they start from the presumption that morality is something “out there”.
For non-consequentialists this seems to usually result in them either simply relying on a combination of intuition (not as much a fault in ethics as in other subjects, but we should try to do better) and axiomatic systems. When intuition collides with an axiomatic system, or different axiomatic systems contradict one another, they don’t have the ability to resolve the issue.
A moral prescription can be judged by how well it satisfies some goal. The goal is ultimately “arbitrary”—it is up to any person making a judgement about a prescriptive system. Separating out prescriptions from goals is perhaps not logically necessary, but I think it is useful to distinguish between moral disagreements that can be eliminated through gaining and spreading knowledge (any disagreement assuming common goals) and those that can’t (goal disagreements).
Even when philosophers correctly recognise that a goal is necessary to judge prescriptions, they tend to think of some way of deriving a goal (typically some form of utilitarianism) as being objectively right. This leads to a tendency to deny evidence that their own personal judgements of prescriptive systems (and those of others) in fact derive from different goals. It seems to me, however, that most consequentialists haven’t properly distinguished between prescriptions and goals by which to judge prescriptions, which leads to more confusion (rule consequentialism is a clumsy attempt to get around this, but as commonly understood it is not very general, as a moral prescription need not be a set of simple general rules).
Simon,
While I’m not sure what you mean by saying that most philosophers working in ethics think that morality is something “out there” I suspect that on a suitable clarification of “out there” it will turn out that lots of constructivists, quasi-realists, and anti-realists of various varieties will not think that morality is out there.
Benquo, question #6 was too easy.
Bruce, thank you for your point 8; it made me think.
Hal: Individual morality looks at the net impact on oneself as well as the group.
Thanks. Your answer regarding question 4 made me think.
Robin: It is striking to me that people who want to think more carefully about moral issues seem to feel little inclination to read the academic literature on this subject. There are in fact specialists who consider these issues; why reinvent the wheel?
Sometimes even the specialists need to be reviewed. :) Maybe law and morality have many things in common?
If anybody has a moment, I am curious to know how morals can exist without faith?
Anna
Sigh. From your last comment I presume that you are of a religion? Anyway, if you want the Darwinian origin of morality, here it is:
Protohumans that had developed an altruistic nature had a higher likelihood of survival than those that did not. Over time, this caused morality to be biologically hardwired into the gene pool. I’m not quite sure what you mean by faith, however. If you mean belief, that is, a concept not proven by evidence, then I don’t see the correlation between faith and morality. If you mean religion, then I disagree: that would suggest that humanity is by nature amoral, which I do not believe. If you’d prefer factual evidence, then I will add that there is no correlation between a lack of religion and immoral behavior. I think history has shown us that fear is not a good source of morality. Edit: Religion tends to be a detriment to societal morality. In a vein similar to racism, unfounded beliefs will inevitably cause conflict. The moral benefits are only observed in a microcosm.
The sheer diversity of moral theories actually applied by some human society at some point in history makes this claim extremely difficult to accept.
It’s likely that most moral positions are consistent with decision theory (i.e. Tit-for-Tat wins many iterated Prisoner’s dilemma tournaments). But that doesn’t require that morality be “baked in” by evolution. The generalized view of organisms as adaption-executors seems sufficient to explain why basic decision theory bears some resemblance to the relatively uncontroversial moral positions.
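As a concrete illustration of the parenthetical claim, here is a minimal iterated Prisoner's Dilemma round robin (my own sketch; the payoff matrix is the standard textbook one, and the strategy roster is arbitrary):

```python
import itertools

# Standard PD payoffs, keyed by (my move, their move), giving my score.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(opp_history):
    return opp_history[-1] if opp_history else 'C'  # cooperate, then mirror

def grudger(opp_history):
    return 'D' if 'D' in opp_history else 'C'  # cooperate until betrayed

def always_defect(opp_history):
    return 'D'

def always_cooperate(opp_history):
    return 'C'

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        # Each strategy sees only the opponent's past moves.
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = [tit_for_tat, grudger, always_defect, always_cooperate]
totals = {s.__name__: 0 for s in strategies}
for a, b in itertools.combinations_with_replacement(strategies, 2):
    sa, sb = play(a, b)
    totals[a.__name__] += sa
    if a is not b:  # count self-play once, as in a round robin with a twin
        totals[b.__name__] += sb

# The reciprocators (tit_for_tat, grudger) outscore always_defect here.
print(totals)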
I was unclear. I apologize. I misrepresented a general inclination to perform conventionally “good” acts as moral and ethical convention. Thank you for your scrutiny. I will ensure to accurately represent my views in the future. Also, “Dilemma” should be capitalized if “Prisoner’s” is.
The ‘relatively uncontroversial’ positions are such because of the extent to which they’ve been permanently wired into human intuition.
No—the “relatively uncontroversial” positions are the ones most consistent with decision theory over repeated iterations.
To the extent that iterated decision theory accurately models historical selection pressures which shaped our intuitions, I agree with you. However, moral positions like “violently victimizing someone from your own tribe for trivial personal gain is bad and should be heavily discouraged” have been uncontroversial since before decision theory was formalized.
Imagine a simple decision game: Should I eat the poisonous fruit? Yes (−100), No (0). Obviously, No is the superior answer, and it didn’t take publication of this decision theory result for humans to realize it. Writing down the decision game records the expected payoffs of the environment—it does not set them.
To take your example, as long as increasing the power of the tribe provides benefits to you (and I agree that it usually will), then reducing inter-tribe squabbling is the better long-term choice. Decision theory doesn’t disagree, but isn’t necessary for the conclusion. However, the incentive is already there, so there’s no reason why evolution would select for a “baked-in” preference.
The fact that the environment rewards certain choices is a sufficient reason for those choices to be favored. I referenced decision theory only to have a way to rigorously identify which choices are favored by pre-existing reward structures.
Note that the user you’re responding to hasn’t posted on LW since 2008, so is unlikely to read your reply.
Valid point. Thank you.
Daniel: philosophers are not all wrong about everything but between them they seem to support every theory that a reasonable person could hold and many more, so they aren’t very useful as a guide as to what to believe. In principle their arguments could still be useful, but in practice I am not impressed by, for example, the arguments against moral skepticism, nor do I find that the arguments for it add anything particularly useful to my knowledge that I could not think of myself.
Anna : If anybody has a moment, I am curious to know how morals can exist without faith?
Huh? Aren’t you confusing morality with fear of retribution? I am curious to know what you think morality is about!
TGGP, if you think that all the people who have specialized in a subject over centuries have made no progress and have nothing valuable to say, then why would you think that you or anyone on this blog would have anything valuable to say?
Robin, I don’t think there is anything valuable to say in the fields of theology and astrology either, but if this blog were to have discussions on those topics I expect I would still enjoy reading them and making the same sorts of comments I am making here.
I would be interested to know in what ways you think the field of ethics has progressed and what things of value have been discovered.
TGGP, do you think your comments have a better than random chance of being true? If yes, why wouldn’t spending more time thinking about the subject improve one’s chances? If not, how could you enjoy making random claims?
TGGP, You used an example of moral progress produced by a philosopher: the word consequentialist.
Kevembuangga: Aren’t you confusing morality with fear of retribution? I am curious to know what you think morality is about!
For me, morality is about the ethical behavior of individuals or groups.
Many people associate morality with faith. The example of the fear of retribution is what makes them strive to be moralistic. I was curious to know what happens when you take faith out of the equation? Will people no longer strive to be moralistic? Can ethical behavior exist without the rules and regulations that have been governed by faith?
I know these questions don’t fit into the main discussion of the thread, my apology.
I know these questions don’t fit into the main discussion of the thread, my apology.
Weirder and weirder, why would this thread have been titled “Consolidated Nature of Morality”?
For me, morality is about the ethical behavior of individuals or groups.
Playing with definitions: morality, see ethics; ethics, see morality. What’s the point?
Can ethical behavior exist without the rules and regulations that have been governed by faith?
As far as I know ethical behavior is not “governed” by faith, it is endorsed by faith. The primary source of morality is an innate repulsion for acts which will damage the “fitness” of the human species: incest, killing of kin or children. On top of that, idiosyncratic cultural traits have been built which sometimes run counter to the “basics” (head hunting, even of children) but are still rooted in social emotions like conformity to group values. The various religious faiths only piggyback upon those as a special case of customs.
Weird because I apologized for bringing up faith when the thread was about Morality?
I have no idea how this reflects the question I was asking regarding that if faith is taken out of the equation will people be more or less inclined to want to be moralistic. I guess this weird one is not smart enough to grasp your intellectual ideas. Thanks for your time, it has been interesting.
Robin writes: do you think your comments have a better than random chance of being true? That’s tough to answer. It can be hard to distinguish the things I’ve stated from the things stated by those who claim to disagree with me. Eliezer agrees that there is no “moral stuff”, but states that he has a different reaction (while I also deny having the reaction he denies). So what would it mean for my ideas to be false? It would mean that normative claims have some truth values, which in my interpretation means that Universe A where normative claim X is true must be detectably different from Universe B where normative claim X is false. If someone took a different interpretation that a claim can have truth value while that value makes no detectable difference, it makes me wonder how different such a claim is from those of the type “Colorless green ideas sleep furiously” and why anyone would care about the truth value of such a claim. So if I am wrong it would essentially mean that normative facts can be empirically discovered in an objective manner. People have been trying to figure out what is “good” for an extremely long time, and there is not yet a generally agreed upon body of knowledge in that area nor any method for building one. Proponents of religions at least hold out the possibility of the divine manifesting itself to us or an afterlife in which we encounter it, but disagreements on norms seem as likely to be settled as an argument about what Sherlock Holmes’ favorite color was. So what would it mean if your views on the nature of morality were wrong?
If yes, why wouldn’t spending more time thinking about the subject improve one’s chances? Well, I don’t expect that I would become more wrong if I read and thought more, just as is the case with theology and astrology. I also likely wouldn’t become more wrong about any of those subjects (plus morality) if I spent more time reading the backs of cereal boxes. Sure, nobody has ever discovered any moral facts by reading them, but I already stated I didn’t think anyone had made any progress reading and thinking about ethics. It would certainly be odd if everyone else was incapable of making such discoveries but I was not.
If not, how could you enjoy making random claims? If you randomly picked a digit from 0 to 9 (inclusive) I could have a good time arguing that it was the best of the bunch (that’s why I can feel free to muse about waking up with a blue tentacle when I know it won’t happen). Eventually I would get bored of that and ask why we care what the best digit is, which makes it less like a “random” claim and more like the claim I’ve made which is in dispute here. To me arguments about what book/movie/etc is better than another are essentially the same, except that I get to pick which one I stick up for.
Douglas writes: You used an example of moral progress produced by a philosopher: the word consequentialist. I first encountered consequentialism in verbal form with the joke “Why did the chicken cross the road?”. I don’t know the philosopher who came up with it, but I can be confident that I would have come across it otherwise, even without reading any philosophy. I don’t consider the word “consequentialism” to be an advancement in ethics; it is more meta-ethics. It is not even generally agreed by people who disagree with me that consequentialism is true, so I don’t know how it can be considered an “advancement”. How much of an advancement would it be in other fields if some facts were never established with any degree of certainty, but the uncertain claims themselves were given names?
Anna writes: if faith is taken out of the equation will people be more or less inclined to want to be moralistic. I admitted to myself I was agnostic/atheist/agnotheist around the time I came to the conclusion about morality discussed above, but I don’t think my behavior has really changed much. I suppose that back when I had strong religious beliefs I had planned not to do such things as having myself taken off life support in the event that such a thing was an issue, because of the sinful nature of suicide, but that was far-off enough that I can’t really know how I would have acted. Even then I didn’t really see anything wrong with other people deciding to do so, so perhaps I really didn’t change much.
Anna : I guess this weird one is not smart enough to grasp your intellectual ideas. Thanks for your time, it has been interesting.
This is a lame trick! My point was, ethical behavior is not “governed” by faith, it is endorsed by faith. I suppose you are a theist, aren’t you?
An afterthought which I think is relevant to this thread. I argued before that the whole idea of bringing “rationality” to moral dilemmas is futile and dangerous.
I asked: “What happens if you take faith out of the equation, will people be more or less inclined to want to be moralistic and can ethical behavior exist without the rules and regulations that have been governed by faith?”
For you. A religious person may feel that their ethical behavior is governed by their faith.
I believe in Something as opposed to Nothing but I am not a theist. I don’t believe in Gods or Goddesses. I don’t see how that’s relevant.
Regarding:
I agree that it’s futile.
Rationality is about looking at it from someone else’s point of view and deciding if it is “right for you” or “wrong for you”, without judgement.
Morals are about beliefs and faith.
Ethical behavior is about “right or wrong”.
I have wondered: “How can I believe myself to be rational and logical and still believe in something that I can’t see, hear, touch, taste or smell?”
I apologize, (yes, that’s weird to you, I know) if my post was too long.
Anna : For you.
No, this is not a personal opinion; that ethical behavior is in large part innate is shown by psychological studies, and also by philosophy going back millennia, Confucius and more.
Anna : A religious person may feel that their ethical behavior is governed by their faith.
Young children “feel” that some of the gifts they get are from Santa Claus or the Tooth Fairy, because they are told that.
Anna : I believe in Something as opposed to Nothing but I am not a theist. I don’t believe in Gods or Goddesses. I don’t see how that’s relevant.
FYI, atheists aren’t believers in Nothing. (Please show us an instance of Nothing) It is relevant to ask about theism because it is commonplace for religionists to claim that there can be no morality without faith.
Anna : I agree that it’s futile [bringing “rationality” to moral dilemmas]
Certainly not for the same reasons as me; you likely didn’t read the comment I linked to, so I will reproduce the substance below:
[bringing “rationality” to moral dilemmas] is the typical legacy of the Greeks, who “invented” logic for this very purpose. Yet it does not make any sense, because our actual decision making (in moral matters as well as in anything else) is NOT rational. George Ainslie’s thesis that we use hyperbolic discounting to evaluate distant rewards instead of the “rational” exponential discounting casts serious doubts about our ability to EVER come up with consistent judgements on any matter. That may be the real cause of most occurrences of akrasia, which are always noticed ex post facto. AND… Trying to enforce “rationality” upon our decisions may lead to severe psychiatric problems:
Intertemporal bargaining also predicts four serious side effects of willpower: A choice may become more valuable as a precedent than as an event in itself, making people legalistic; signs that predict lapses should become self-confirming, leading to failures of will so intractable that they seem like symptoms of disease; there is motivation not to recognize lapses, which might create an underworld much like the Freudian unconscious; and concrete personal rules should recruit motivation better than subtle ones, a difference which could impair the ability of will-based strategies to exploit emotional rewards.
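For readers unfamiliar with the Ainslie result mentioned above, here is a minimal numerical sketch (my own; the reward sizes, delays, and discount parameters are made up). Hyperbolic curves cross as rewards draw near, producing preference reversals; exponential curves never cross, so a "rational" exponential discounter ranks the two rewards the same at every distance:

```python
def hyperbolic(amount, delay, k=1.0):
    # Ainslie-style hyperbolic discount curve: V = A / (1 + k*D)
    return amount / (1.0 + k * delay)

def exponential(amount, delay, delta=0.9):
    # "Rational" exponential discount curve: V = A * delta**D
    return amount * delta ** delay

SMALL, LARGE, GAP = 50.0, 100.0, 5  # the larger reward arrives 5 steps later

for delay_to_small in (10, 0):  # viewed from afar, then up close
    h = ('small' if hyperbolic(SMALL, delay_to_small) >
         hyperbolic(LARGE, delay_to_small + GAP) else 'large')
    e = ('small' if exponential(SMALL, delay_to_small) >
         exponential(LARGE, delay_to_small + GAP) else 'large')
    print(f'delay {delay_to_small:2d}: hyperbolic prefers {h}, '
          f'exponential prefers {e}')

# Hyperbolic flips from 'large' (far away) to 'small' (up close): the
# preference reversal behind akrasia. Exponential gives the same ranking
# at every distance, since the ratio of the two values never changes.
```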
Anna : Rationality is about looking at it from someone else’s point of view and deciding if it is “right for you” or “wrong for you”, without judgement.
Either I don’t catch at all what you mean or you have strange ideas about rationality. To me rationality isn’t (primarily) about someone else’s point of view but about using evidence and sound inferences to build a model of perceived reality. Nothing to do with “right for [me]” or “wrong for [me]” or with judgement. If I am using rationality I can expect to be able to share the resulting model with other people (intersubjectivity) or to amend or replace this model if I am shown factual errors or omissions in either my evidence or my inferences. This does not seem to be possible when the “other people” invoke “faith” to assert or deny facts without evidence or to condone faulty inferences.
Anna : Morals are about beliefs and faith.
You are just asserting the point you supposedly want to debate!
Anna : Ethical behavior is about “right or wrong”.
You are making up your own definitions, not much chance to reach any agreement on the substance of the debate if the terminology is uncertain. I would ask anyway, “right or wrong” about what and for whom?
Anna : “How can I believe myself to be rational and logical and still believe in something that I can’t see, hear, touch, taste or smell.”
Physicists “believe” in quarks, cosmologists in dark matter, neither of which they can see, hear, touch, taste or smell. Yet they are pretty hardcore rationalists; how do they manage?
Anna : I apologize, (yes, that’s weird to you, I know) if my post was too long.
How funny, you seem to feel blamable for the length of your post but not for sneaking in a snide remark. I surmise that you feel guilt about “breaking the rules” but don’t really care about fellow humans.
On #3, I think it’s more relevant to point out that many adults believe that God can make it all right to kill someone. What children believe about God and theft is a pale watered-down imitation of this.
I don’t know if you will respond here, joe, but it has been requested that our earlier conversation relocate.
Of course. I am morally opposed to your basis for choosing morals; I am determined to show that it must lead to a contradiction.
If no moral preference is better than any other, then randomly assigning an arbitrary set of morals, from all possible sets of morals, to each individual person should be no better or worse than any other approach. However, I see the consequences of such an experiment as potentially creating utter chaos and destruction leading to the downfall of the human race.
Certainly the survival of our race must be better than the alternative.
My reply to Tarleton from Doubting Thomas and Pious Pete:
Eliezer, just for clarification, would you say that you’re “right” and God is “wrong” for thinking genocide is “good”, or just that you and God have different goal systems and neither of you could convert the other by rational argument? (Should this go on the consolidated morality thread instead?)
Hard to give answers about God. If I was dealing with a very powerful AI that I had tried to make Friendly, I would assess a dominating probability that the AI and I had ended up in different moral frames of reference; a weak probability that the AI was “right” and I was “wrong” (within the same moral frame of reference, but the AI could convince me by rational/non-truth-destroyable argument from my own premises); and a tiny probability that the AI was “wrong” and I was “right”.
I distinguish between a “moral frame of reference” and a “goal system” because it seems to me that the human ability to argue about morality does, in cognitive fact, invoke more than consequentialist arguments about a constant utility function. By arguing with a fellow human, I can change their (or my) value assignments over final outcomes. A “moral frame of reference” indicates a class of minds that can be moved by the same type of moral arguments (including consequentialist arguments as a special case), rather than a shared constant utility function (which is only one way to end up in a shared reference frame on a particular moral issue).
This is how I would cash out Gordon Worley’s observation that “Morality is objective within a given frame of reference.”
joe, “utter chaos and destruction leading to the downfall of the human race” is not a contradiction. I assert that it cannot be objectively known that this outcome is “bad”. Some of the more extreme environmentalists would assert it is a good thing, and some alien species might think of it in the same manner as we might think of eradicating smallpox. Furthermore, as I do not think morality is objective, I do not feel my beliefs need to be universalizable. My belief in a certain nature of morality is not going to cause the rest of humanity to share that belief, discussing what would happen if everyone shared it would be like wondering about waking up with a blue tentacle.
Eliezer, you (and Worley) say “Morality is objective within a given frame of reference.” What is the difference between that and “morality is subjective”? It seems each frame of reference is itself subjective, we can never really know whether any two individuals share the same frame of reference (I do not think even an individual keeps the same frame of reference consistently for any considerable length of time). Do you think that aesthetic beauty or value are also objective “within a given frame of reference”, or are they of a different nature than morality?
TGGP, I’d say that aesthetics are objective within a given frame of reference. Given my sense of beauty, I can’t make myself pretend that a flower is ugly when it is in fact beautiful, or vice versa. I don’t experience it as a personal choice, but as a discovery.
Well, that just begs for consideration of what would happen to morality (and aesthetics) given the ability to modify one’s own frame of reference. It’s not a pleasant thing to consider.
Tarleton, unless you make a mistake (carry out an action you would not carry out given full information and comprehension regarding its consequences) you cannot, by definition, modify your own “frame of reference” any more than you can modify your own priors. What argument is it that caused you to want to change your frame of reference? This, by definition, is part of your frame of reference.
It should be understood that I am simply defining what I mean by “frame of reference”, not making any empirical argument—any statement about real-world consequences for self-modifying minds—which of course cannot carry by the mere meaning of words.
I agree that things get hairy when you have the ability to modify the cognitive representation of your utility function—this is one of my research areas. However, it seems probable that most simple agents will not want to modify their optimization targets, for the same reason Gandhi doesn’t want to swallow a pill that will make him regard murder as a moral good. Our own moral struggles indicate that we have a rather complicated frame of reference, required to make any moral question nontrivial!
Incidentally, any professional philosophers interested in writing about self-modifying moral agents, do please get in touch with me.
7. Analogous action: administer the potion described in 6.
:D
Is there any actual rational reason for a person to be moral at all, or indeed to have any other priority?
My position, which I keep finding myself arguing here, is that this isn’t even a meaningful question. Rationality applies to beliefs, not terminal values. It doesn’t even make sense to wonder if there is a Bayesian way to decide what to care about.
We sometimes speak of rational reasons for priorities or non-terminal values but what we really mean is that we have rational reasons to believe that fulfilling that priority or non-terminal value fulfills our terminal value. A non-terminal normative claim is a mixed is/ought claim and the word rational is describing the ‘is’ part, not the ‘ought’ part.
I agree with Jack’s comment on this, but must also state: morality as a proximal value is actually rather effective in achieving most terminal values. Winning friends has a tendency to aid with almost anything else you might wish to do; and behaving morally has a tendency to aid in winning friends, and avoiding the acquisition of enemies.
There are of course some exceptions, though: behaving ruthlessly and amorally probably helps in gaining promotions, dealing with business rivals, etc. (I don’t have any actual experience, just guessing from what I know of it and asking for comment from those with experience in those areas).
“No results found for ‘expert moral system’.”
-google.
Remember: it doesn’t have to be perfect, just better than us.
Edit: Google was misinformed—this has been discussed. Nevertheless the point stands: unless there’s a particular reason to think that we would perform better than an expert system on this topic, I am skeptical that acting, except insofar as to create one, is anything but short-term, context-dependent morality.
In line with the maxim “read the textbook first” I offer metaethics:
https://plato.stanford.edu/entries/metaethics/
https://iep.utm.edu/metaethi/
Nietzsche claimed that “there are no moral facts at all”. It does seem that any moral system requires some axiom that cannot be derived from facts about the world, or logic.
Famously, Kant’s Categorical Imperative is one such axiom.