[SEQ RERUN] Whither Moral Progress?
Today’s post, Whither Moral Progress? was originally published on 16 July 2008. A summary (taken from the LW wiki):
Does moral progress actually happen? And if it does, how?
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we’ll be going through Eliezer Yudkowsky’s old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Lawrence Watt-Evans’s Fiction, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day’s sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
Did anyone ever link to his ideas?
One key part is presumably this:
I think there has been moral progress, likely from the continued pressure of people trying to enforce their moral preferences, but there have been regressions as well, and there is nothing inevitable about it.
For example, I think Orwell’s vision is horrifying because it seems plausible—just as under one set of conditions, it’s like there’s a benevolent invisible hand, under some other conditions, it would be like there’s a malevolent fist. Whether you agree with Orwell or Hayek about what those conditions are, a morality death spiral seems a plausible proposition.
There has been huge progress in both average and total human happiness (and welfare, QALYs, etc.) over the past few centuries. Most of it has been technological, but a significant part can be attributed to changes in common moral beliefs. A change in morals that causes or enables greater happiness is what I call moral progress: change for the better.
The large majority of humans have been emancipated (women, ethnic and religious minorities), legally and socially. Most people are no longer taught they are inherently inferior, or evil and dangerous, or hated and punished by the gods, or to blame for the sins of their ancestors. Consequently they believe it much less themselves.
Furthermore, most or all technological and scientific progress (which has greatly increased human happiness) consists of things that accepted morals of the Christian era, up to a few centuries ago, forbade. Remember “think not of happiness in this life, but in the next”? Remember the protests against analgesics, and before that obstetrics, and before that all surgery and medicine in general? The claims that suffering was godly punishment for human sins and it was sinful (morally wrong) to alleviate that suffering? The loss of such beliefs among most people can certainly be called “moral progress” in my view.
“The future is already here—it’s just not very evenly distributed.”—William Gibson.
Morals (“free speech, democracy, mass street protests against wars, the end of slavery… and we could also cite female suffrage, or the fact that burning a cat alive was once a popular entertainment… and many other things that our ancestors believed were right, but which we have come to see as wrong, or vice versa”) are also not very evenly distributed. Statements about moral progress are premature.
Seems to me there are two sources of moral progress:
First, we have better information. Some of our decisions about "right" and "wrong" are based on the outcomes of those actions, especially when compared with existing alternatives. Therefore, if we find a better alternative, if we discover that we were wrong in predicting the consequences of some action, or if the environment changes so that the same action will now have different consequences, we may update our perceptions of "right" and "wrong" in a way our ancestors would agree with (after being told what we know).
Second, our capacities for abstract reasoning are increasing, and we have a desire for a consistent morality. So if at some moment we realize that X and Y are just two examples of the same template, we will want a morality that says X and Y are both "right", or both "wrong", or else explains the different judgments by a relevant difference between X and Y. Our ancestors would in general agree that morality needs to be consistent.