The PDF version can be read here.
Moral realism is an explicit version of the ordinary view of morality. It has the following assumptions:
Good and evil are objectively real.
We have the ability to recognize good and evil.
We have an objective moral obligation to do good and not do evil. Likewise, we have an objective moral right to not have evil done to us.
Society depends on morality to exist. The social order is created by human goodness, and it is destroyed by evil.
There are many problems with moral realism, including:
Moral realism has no definition of good and evil. If good and evil are objectively real properties or substances, like temperature or oxygen, it should be possible to define them in scientific terms.
Why are we obliged to do good and not do evil? Moral realism does not explain why we have this obligation, nor how it is imposed on us.
Moral judgments vary between individuals, cultures and societies. If good and evil are objective, and humans have essentially the same ability to recognize good and evil, then we would expect moral judgments to be mostly the same. But they are not.
In most cases, moral disagreements cannot be resolved by rational persuasion. If good and evil are objective, and humans have the ability to recognize good and evil, then we should be able to resolve moral disagreements with evidence and arguments. But we can’t.
Morality is ad hoc. Moral judgments can’t be reduced to a small number of principles applied consistently. The ad hoc nature of morality is hard to explain if morality reflects objective good and evil.
If moral realism were true, then most people would be morally good. But evil is pervasive. Generally speaking, individuals and societies don't behave in a morally good way, and morality is often linked to hypocrisy.
Let’s go through these problems in more detail, starting with the definition of good and evil.
What are good and evil?
If good and evil are objectively real, then we should be able to measure them, analogous to how we measure height or temperature. We could construct a device to measure things on this objective moral dimension, in a way that is free from personal biases. Then we could use the device to resolve moral conflicts, in the same way that we can use a ruler to resolve a disagreement about height. But of course, we can’t do any of those things for good and evil.
(see the rest of the post in the link)
I agree with the thrust and most of the content of the post, but in the interest of strengthening it, I’m looking at your list of problems and wanted to point out what I see as gaps/weaknesses.
For the first one, keep in mind it took centuries from trying to develop a temperature scale to actually having the modern thermodynamic definition of temperature, and reliable thermometers. The definition is kinda weird and unintuitive, and strictly speaking runs from 0 to infinity, then discontinuously jumps to negative infinity (but only for some kinds of finite systems), then rises back towards negative zero (I always found this funny when playing the Sims 3, since it had a "−1K Refrigerator"). Humans knew things got hot and cold for many, many millennia before figuring out temperature in a principled way. Morality could plausibly be similar.
The third and fourth seem easily explainable by bounded rationality, in the same way that “ability to build flying machines and quantum computers” and “ability to identify and explain the fundamental laws of physical reality” vary between individuals, cultures, and societies.
For the fifth, there’s no theoretical requirement that something real should only have a small number of principles that are necessary for human-scale application. Occam’s Razor cuts against anyone suggesting a fundamentally complex thing, but it is possible there is a simple underlying set of principles that is just incredibly complicated to use in practice. I would argue that most attempts to formalize morality, from Kant to Bentham etc., have this problem, and one of the common ways they go wrong is that people try to apply them without recognizing that.
The sixth seems like a complete non-sequitur to me. The claim is that if moral realism were true, then people would be morally good. But why would they be? Even if there were somehow a satisfying answer to the second problem, of how an obligation gets imposed on us, that would not necessarily provide an actual mechanism to compel action, or a tendency to act so as to fulfil the obligation. In fact, at least some traditional attempts at moral realist frameworks, like Judeo-Christian God-as-Lawgiver religion, explicitly avoid having such a mechanism.
The whole field of meta-ethics is bogus and produces verbiage that isn't helpful. The last paragraph here hits the nail on the head: an interlocutor can grant basically anything about morality, but as long as there isn't an enforcement mechanism, it's simply irrelevant. Any non-supernatural enforcement mechanism just turns compliance or non-compliance into a game-theoretic problem. And even if there were a supernatural enforcement mechanism, people would just cooperate out of calculation, in which case morality and altruism get divorced. Morality then loses its emotional appeal and, if you think about it long enough, reduces to individual selfishness, which chokes the motivation to care about morality in the first place.
I don’t think good and evil are objectively real as moral terms, but if something makes us select against certain behaviour, it may be because said behaviour results in organisms deleting themselves from existence. So "evil" actually means "unsustainable". But this makes it situational (your sustainable expenditure depends on your income, for instance, so spending $100 cannot be objectively good or evil).
Yes, and which actions result in you not existing will also vary. There’s no universal morality for the same reason that there’s no universal “best food” or “most fitting zoo enclosure”, for “best” cannot exist on its own. Calling something “best” is a kind of shortcut, there’s implicit things being referred to.
What’s the best move in Tetris? The correct answer depends on the game state. When you’re looking for “objectively correct universal moral rules” you might also be throwing away the game state on which the answer depends.
I’d go as far as to say that all situations where people are looking for universal solutions are mistaken, as there may (necessarily? I’m not sure) exist many local solutions which are objectively better in the smaller scope. For instance, you cannot design a tool which is the best tool for fixing any machine; instead you will have to create hundreds of tools which are the best for each part of each machine. So hammers, saws, wrenches, etc. exist, and you cannot unify all of them to get something which is objectively better than any of them in any situation. But does this imply that tools are not objective? Does it not rather imply that good is a function taking at least two inputs (tool, object) and outputting a value based on the relation between the two? (A third input could be context, e.g. water is good for me in the context that I’m thirsty.)
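The idea of "good" as a relation rather than a property can be sketched as a toy function. Everything here (the tool names, the suitability numbers, the `context` multiplier) is made up for illustration; the point is only that the output depends on the pairing, not on the tool alone.

```python
# Toy sketch: "goodness" as a function of (tool, target) rather than a
# property of the tool itself. All ratings are invented for illustration.

SUITABILITY = {
    ("hammer", "nail"): 0.9,
    ("hammer", "screw"): 0.2,
    ("screwdriver", "screw"): 0.9,
    ("screwdriver", "nail"): 0.1,
}

def goodness(tool, target, context=1.0):
    """How 'good' a tool is depends on what it is paired with
    (and optionally on context), not on the tool in isolation."""
    return SUITABILITY.get((tool, target), 0.0) * context

print(goodness("hammer", "nail"))   # high: right tool for the job
print(goodness("hammer", "screw"))  # low: same tool, wrong target
```

Asking "which tool is objectively best?" without fixing the target is like asking for the best Tetris move without the game state: the question drops an input the answer depends on.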
If my take is right, then something like 80% of all philosophical problems turn out to be nonsense. In other words, most unsolved problems might be due to flawed questions. I’m fairly certain of this take, but I don’t know if it’s obvious or profound.
I think this issue has been discussed at length and repeatedly on LW, leading to a weak consensus that at least strong moral realism isn’t true.
Can anyone supply links to some other good posts on the topic?
Yudkowsky has written about it:
Only the first point “Good and evil are objectively real” is a necessary part of moral realism. Sometimes the first half of the third (“We have an objective moral obligation to do good and not do evil”) is included, but by some definitions that is included in what good and evil mean.
All the rest are assumptions that many people who believe in moral realism also happen to hold, but aren’t part of moral realism itself.
Replace "morality" with "rationality" throughout the post and you get a reductio ad absurdum.
If we imagine that each individual actor in an environment is constantly transducing incident causal events into further watersheds of causal event chains, and we allow each actor to evaluate the benefit or harm of a given incident causal event, we have a basis for a definition of moral good or evil. The moral good or evil of an action is then a ratio: its benefits minus its harms, summed over the entire causal watershed and all affected actors, divided by the total of benefits and harms determined in the same way. This quantity is computable and ranges from +1 (pure good) to −1 (pure evil).
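A minimal sketch of this ratio, assuming (a large assumption, which is the crux of the thread) that each effect of an action on each affected actor could somehow be assigned a signed numeric value. The function name and the example numbers are illustrative, not part of the comment's proposal.

```python
# Sketch of the proposed ratio: (benefits - harms) / (benefits + harms),
# summed over all affected actors across the causal watershed.
# Positive values in `effects` are benefits; negative values are harms.

def moral_score(effects):
    """Return a value from +1 (pure good) to -1 (pure evil)."""
    benefits = sum(e for e in effects if e > 0)
    harms = sum(-e for e in effects if e < 0)
    total = benefits + harms
    if total == 0:
        return 0.0  # no benefits or harms at all: morally neutral
    return (benefits - harms) / total

print(moral_score([5, 3, -2]))  # mostly beneficial action
print(moral_score([-4, -1]))    # purely harmful action
```

Note that the hard part is not the arithmetic but the inputs: the sketch quietly assumes a common unit for benefit and harm across actors, which is exactly what the rest of the thread disputes.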
I think there’s a much simpler case against it: show me the instrument readings, or at least tell me the unit of measure.