For the past, in some ways only, we are moral degenerates
Have human values improved over the last few centuries? Or is it just that current human values are naturally the closest to our own (current) values, so we interpret the trend towards us as moral progress?
If we project out into the future, the first scenario posits continuing moral improvement (as the “improvement trend” continues), while the second posits moral degeneration (as values drift away from our own). So which is it?
I’ll make the case that both trends are happening. We have a lot less slavery, racism, and ethnic conflict, and far fewer endorsements of slavery, racism, and ethnic conflict. In an uneven way, poorer people have more effective rights than they did before, so it’s somewhat harder to abuse them.
Notice something interesting about the previous examples? They can all be summarised as “some people who were treated badly are now treated better”. Many people throughout time would agree that these people are actually being treated better. On the issue of slavery, consider the following question:
“If X would benefit from being a non-slave more than being a slave, and there were no costs to society, would it be better for X not to be a slave?”
Almost everyone would agree to that throughout history, barring a few examples of extremely motivated reasoning. So most defences of slavery rest on the idea that some classes of people are better off as slaves (almost always a factual error, and generally motivated reasoning), or that some morally relevant group of people benefited from slavery enough to make it worthwhile.
So most clear examples of moral progress are giving benefits to people, such that anyone who knew all the facts would agree it was beneficial for those people.
We might expect that trend to continue: as we gain greater knowledge of how to benefit people, and greater resources to do so, we can expect more people to be benefited.
Values that we have degenerated on
But I’ll argue that there is a second class of values with less of a “direction” to them, where we could plausibly be argued to have “degenerated”, and hence where we might expect our descendants to “degenerate” further (i.e. move further away from us).
Community and extended family values, for example, are areas where much of the past would be horrified by the present. Why are people not (generally) meeting up with their second cousins every two weeks, and why do people waste time gossiping about irrelevant celebrities rather than friends and neighbours?
On issues of honour and reputation, why have we so meekly accepted becoming citizens of administrative bureaucracies, deferring to laws and courts rather than taking pride in meting out our own justice and defending our own honour? “Yes, yes”, the hypothetical past person would say, “your current system is fairer and more efficient; but why did it have to turn you all so supine? Are you not free men?”
Play around with vaguely opposite virtues: spontaneity versus responsibility; rationality versus romanticism; pride versus humility; honesty versus tact, and so on. Where is the ideal mean between the two extremes in any of those pairs? Different people and different cultures put the ideal mean in different places, and there’s no reason to suspect that the means are “getting better” rather than just “moving around randomly”.
I won’t belabour the point; it just seems to me that there are areas where the moral progress narrative makes more sense (giving clear benefits to people who didn’t have them) and areas where the “values drift around” narrative makes more sense. And hence we might hope for continuing moral progress in some areas, and degeneration (or at least stagnation) in others.
Someone might consider this semantics, but doesn’t the “values drift around” model imply that there is neither progress nor degeneration for the values in question, since it’s just random drift?
In other words, if there is the possibility of progress in some values, then that implies that some values are better or worse than ours; and if others just show random drift and there’s no meaningful “progress” to speak of, then those values drifting away from ours doesn’t make them any worse than ours, it just makes them different.
I realize that you did explicitly define “degeneration = moving away from ours” for the drifting values, but it feels weird to then also define “progress = moving away from ours in a good way” for the progress-y values; it feels like you are applying an operation defined for the domain of progress-y values to a domain where it isn’t applicable.
To decompose a bit more, I’d say that we need to distinguish between other people’s values and the state of the world, and ask whether each is moving towards our values, away from our values, or just drifting.
So “progress”, in the sense of my post, is composed of a) other people’s values moving towards our own, and b) the state of the world moving more towards our own preferences/values. “Moral degeneration”, on the other hand, is c) people’s values drifting away from our own.
I see all three of these happening at once (along with, to some extent, “the state of the world moving away from our values”, which is another category), so that’s why we should expect to see both progress and degeneration in the future.
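If it helps, here is a minimal toy sketch of that decomposition in Python (my own illustration; the value dimensions and numbers are made up, and distance-in-value-space is just one possible way to cash out “towards” and “away”):

```python
import numpy as np

# Toy model: values as points along a few dimensions. Where another
# society's gap to our values shrinks over time, that looks like (a)
# "progress"; where it grows, that looks like (c) "degeneration". The
# same comparison, run on world-states scored by our preferences,
# would capture (b).

our_values  = np.array([0.9, 0.2, 0.7, 0.4])  # hypothetical dimensions
theirs_past = np.array([0.1, 0.8, 0.6, 0.5])  # their values, earlier
theirs_now  = np.array([0.6, 0.3, 0.2, 0.5])  # their values, later

gap_past = np.abs(our_values - theirs_past)
gap_now  = np.abs(our_values - theirs_now)

for i, (g0, g1) in enumerate(zip(gap_past, gap_now)):
    trend = "progress" if g1 < g0 else ("degeneration" if g1 > g0 else "static")
    print(f"dimension {i}: gap {g0:.2f} -> {g1:.2f} ({trend})")
```

On these made-up numbers, some dimensions converge and others diverge, which is the “all three happening at once” picture.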
Recently Robin Hanson posted about the difference between fighting along the frontier vs. expanding the frontier. It’s a well-known point, but since I was recently reminded of it, it’s salient to me, and it seems quite relevant here.
When we ask if human values have “improved” or “degenerated” over time, we need some way of judging the increase or decrease. One way is to check whether humans get to realize more value, as judged by each individual and then normalized and aggregated, along certain dimensions within the multidimensional space of values. To take your example of engagement with extended family: most moderns have less of this than ancients did, both on average and, it seems, at the maximum, i.e. modern systems preclude as much engagement as was possible in the past, such that a modern person maximally engaged with their extended family is less engaged than was maximally possible in the past. This seems to be traded off, though, against greater freedom from the need to engage with extended family, because alternative systems allow a person to fulfill other values without reliance on extended family. As a result this looks much like a “fight”, i.e. a trade-off along the value frontier between one value and another.
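As a rough sketch of that “normalized and aggregated” judgement (my own toy construction; the min-max normalization scheme, the value dimensions, and the numbers are all assumptions, not something specified above):

```python
import numpy as np

# Each row: how much value one individual realizes along several value
# dimensions, on that individual's own scale. Rescale each person's
# scores to [0, 1] so that no one's scale dominates, then average
# across people to get one aggregate score per dimension.

raw_scores = np.array([
    [2.0,  8.0,  5.0],   # person A
    [30.0, 10.0, 50.0],  # person B, on a very different personal scale
    [0.1,  0.9,  0.4],   # person C
])

mins = raw_scores.min(axis=1, keepdims=True)
maxs = raw_scores.max(axis=1, keepdims=True)
normalized = (raw_scores - mins) / (maxs - mins)

aggregated = normalized.mean(axis=0)  # one number per value dimension
print(aggregated)
```

Comparing these per-dimension aggregates across eras would be one way to operationalize “improved” vs. “degenerated” along each dimension.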
You give the example of reduced slavery as a general benefit, but I think we can tell a similar story that it is a trade-off. We trade off individual choice of labour, living conditions, etc. against the right of the powerful to make those decisions for the less powerful. In this sense the reduction in slavery takes away something of value from someone (the would-be slaveholders) to give it to someone else (the would-be slaves). We may judge this to be an expansion or a value-efficiency improvement under two conditions (which slightly change what we mean by expansion):
(1) there is more value overall, i.e. we traded less value away than we got back in return
(2) there is more value overall along all dimensions
I would argue that case (1) is really still a fight, though, because we are still making a trade-off; we are just moving to somewhere more efficient along the frontier. From this perspective the end of slavery was not an expansion of values, but it was a trade-off for more value.
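A small sketch of the difference between the two conditions (my own illustration; reading condition (2) as “no dimension loses value” is my interpretation of it):

```python
# `before` and `after` are the aggregate value realized along each
# value dimension, before and after some change.

def net_gain(before, after):
    """Condition (1): more value overall, even if some dimensions lose."""
    return sum(after) > sum(before)

def gain_on_all_dimensions(before, after):
    """Condition (2): no dimension loses, and at least one gains."""
    deltas = [a - b for a, b in zip(after, before)]
    return all(d >= 0 for d in deltas) and any(d > 0 for d in deltas)

# Abolition-style trade-off: one dimension (power over others) shrinks,
# another (self-determination) grows by more.
before = [5.0, 2.0]
after  = [1.0, 9.0]
print(net_gain(before, after))                # True  -> case (1): still a "fight"
print(gain_on_all_dimensions(before, after))  # False -> not a pure expansion
```

In this toy example the change passes (1) but fails (2), which is exactly the “still a fight” reading.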
But if we are so strict, is anything truly a pure expansion? This seems quite tricky, because humans can value arbitrary things, so for every action that increases some value we are, it seems, necessarily decreasing the ability to realize some counter-value. For example, it might seem that something like “greater availability of calories” would result in pure value expansion, assuming we can screen off all the complicated details of how we make more calories available to humans and how that process affects values. But suppose you value scarcity of calories, maybe even directly; then for you this will be a fight, and we must interpret an increase in the availability of calories as a trade-off rather than as a pure expansion in values.
This is potentially troubling because it means there’s no universal way to judge moral progress if there can be no expansion without some contraction somewhere. It would seem that there must always be contraction of something, even if it is an efficient contraction that generates more value than it gives up.
So in the end I guess I am forced to (mostly) agree with your assessment, even though you frame it in a way that seems foreign to me. It feels foreign because it seems every improvement is also a degeneration and vice versa, and the relevant question of improvement is mostly whether or not we are generating more value in aggregate (an efficiency improvement), if we want to be neutral about which value dimensions to optimize along.
I actually don’t love the idea of making aggregate value something we optimize for, though, because I worry about degenerate cases, like highly optimizing along a single value dimension at the expense of all others, such that the result is an overall increase in value but in a way we wouldn’t want. Arguably, if we were measuring value correctly in this system, such a situation would be impossible, because it would be factored in as a decrease in whatever value was being traded off against, the loss of which is what made us dislike the “optimization” in the first place.
I instead continue to think that value is a confused concept that we need to break apart and reunderstand, but I’m still working on deconfusing myself on this, so I have nothing additional to report in that direction for now.
I think at least some of this has to do with the fact that some forms of local coordination can hurt global coordination (think of price fixing, organized crime, nepotism, nimbyism), so evolution has favored cultures that managed to reduce such local coordination. (Of course if community and extended family values are terminal values instead of instrumental ones this would still imply “degeneration”, but I’m not sure if they are.)
I think community and extended family are (or were) terminal values, to some extent at least (which doesn’t preclude them being instrumental values also).
I think that conditional on some form of moral anti-realism, community and extended family likely were terminal values, and there has been “moral degeneration” in the sense that we now weigh such values less than before. But it seems to me that conditional on moral anti-realism, slavery was also a kind of terminal value, in the sense that slave owners weighed their own welfare higher than the welfare of slaves, and racism was a kind of terminal value in that people weighed the welfare of people of their own race higher than people of other races. This seems to be what’s going on if we put aside the factual claims.
If you disagree with this, can you explain more why it’s a terminal value to weigh one’s local community or extended family more than others, but not a terminal value to weigh oneself or people in one’s race or one’s social class (e.g., the nobility or slave owners) more than others? Or why that’s not what’s going on with racism or slavery?
I was talking about “extended family values” in the sense of “it is good for families to stick together and spend time with each other”; this preference can (and often does) apply to other families as well. I see no analogue for that with slavery.
But yeah, you could argue that racism can be a terminal value, and that slave owners would develop it, as a justification for what might have started as an instrumental value.
It seems that at least some people valued slavery in the sense of wanting to preserve a culture and way of life that included slavery. The following quotes from https://www.battlefields.org/learn/articles/why-non-slaveholding-southerners-fought seem to strongly suggest that slavery/racism (it seems hard to disentangle these) was a terminal value at least for some (again assuming moral anti-realism):
Back to you:
What scares me is the possibility that moral anti-realism is false, but we build an AI under the assumption that it’s true, and it “synthesizes” or “learns” or “extrapolates” some terminal value like or analogous to racism, which turns out to be wrong.
One way of dealing with this, in part, is to figure out what would convince you that moral realism was true, and put that in as a strong conditional meta-preference.
I can see two possible ways to convince me that moral realism is true:
1. I spend hundreds or more years in a safe environment with a bunch of other philosophically minded people and we try to come up with arguments for and against moral realism, counterarguments, counter-counterarguments and so on, and we eventually exhaust the space of such arguments and reach a consensus that moral realism is true.
2. We solve metaphilosophy, program/teach an AI to “do philosophy”, somehow reach high confidence that we did that correctly, and the AI solves metaethics and gives us a convincing argument that moral realism is true.
Do these seem like things that could be “put in as a strong conditional meta-preference” in your framework?
Yes, very easily.
The main issue is whether these should count as an overwhelming meta-preference, one that outweighs all other considerations. And, as I currently have things set up, the answer is no. I have no doubt that you feel strongly about potentially true moral realism. But I’m certain that this strong feeling is not absurdly strong compared to other preferences at other moments in your life. So if we synthesised your current preferences, and 1. or 2. ended up being true, then the moral realism would end up playing a large-but-not-dominating role in your moral preferences.
I wouldn’t want to change that, because what I’m aiming for is an accurate synthesis of your current preferences, and your current preference for moral-realism-if-it’s-true is not, in practice, dominating your preferences. If you wanted to ensure the potential dominance of moral realism, you’d have to put that directly into the synthesis process, as a global meta-preference (section 2.8 of the research agenda).
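As a toy illustration of “large-but-not-dominating” (my own sketch, not the actual mechanics of the synthesis process in the research agenda; the preference names and strengths are invented):

```python
# Strength-weighted aggregation of preferences. Even a strongly held
# conditional preference ("follow moral realism if it turns out true")
# ends up with a large but finite share of the weight, unless it is
# deliberately installed as an overriding global meta-preference.

preferences = {
    "follow_moral_realism_if_true": 8.0,  # strongly felt, but finite
    "family_and_friends": 6.0,
    "comfort_and_health": 5.0,
    "curiosity": 4.0,
}

total = sum(preferences.values())
weights = {name: strength / total for name, strength in preferences.items()}

for name, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {weight:.2f}")
# The top preference gets roughly 0.35 of the weight: large, but it
# does not override everything else.
```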
But the whole discussion feels a bit peculiar to me. One property often assumed of moral realism is that it is, in some sense, ultimately convincing: that all systems of morality (or all systems derived from humans) will converge to it. Yet when I said “large-but-not-dominating role in your moral preferences”, I was positing that moral realism is true, but that we have a system of morality, U_H, that does not converge to it. I’m not really grasping how this could be possible (you could argue that the moral realism U_R is some sort of acausal trade convergent function, but that gives an instrumental reason to follow U_R, not an actual reason to have U_R; and I know that a moral system need not be a utility function ^_^).
So yes, I’m a bit confused by true-but-not-convincing moral realisms.
I’m not sure the feeling against slavery is so uniform.
All would agree that for X it is better, but there is always a cost: no-one then gets the use of X as a slave. History, as Thucydides observes, has consisted of the strong doing what they will, and the weak bearing what they must. The strong see this as the proper nature of things, and would scoff at the question. The weak can but impotently daydream of paradise.
As late as 1848, these lines were penned in a Christian hymn: “The rich man in his castle, the poor man at his gate, God made them high and lowly, and ordered their estate.” The verse has since fallen into disuse, which would shock people of a few centuries ago, who saw the social order as divinely ordained.
What about when the poor men come barging through the rich man’s gate? I take it that too is factored into the divine plan?
Only insofar as even man’s sinning is part of the divine plan; but though part of the divine plan, it is still sin. The social order is the divine order, each is born into the position that God has ordained, and the King rules by the grace of God. So it has been believed in some former times and places.
There is still a trace of that in our (British) coins, which have a Latin inscription meaning “[name of monarch] by the grace of God King/Queen, defender of the faith.”
It seems to me that in the cases of extended family values and {honor and reputation}, both modern values and past values are indirect attempts to optimize the same hidden objective. While a modern person may be shocked by past values in these categories, and a past person shocked by modern values, if we dug deep enough into why we actually care about these values, we would find that we care about them for the same reasons, and measurable progress can be made on that shared base-level value, which is more important than the surface-level values, which may change over time.
Yes, things like honour and anger serve important signalling and game-theoretic functions. But they also come to be valued intrinsically (the same way people like sex, rather than just wanting to spread their genes), and strongly valued. This makes it hard to agree that “oh, your sacred core value is only in the service of this hidden objective, so we can focus on that instead”.
I certainly agree that we’d probably have a hard time getting someone from the past to understand that.
I hope, for myself and my cohort generally, that we’d be able to rise above being attached to things that don’t actually matter, and to sacrifice superficial values for what we care most about.