What scares me is the realization that moral change mostly doesn’t happen via “deliberation” or “reflection” but instead through this kind of tolerance/intolerance, social pressure, implicit/explicit threats, physical coercion, up to war. I guess the way it works is that some small vanguard gets convinced of a new morality through “reason” (in quotes because the reasoning that convinces them is often quite terrible, and I think they’re also often motivated by implicit considerations of the benefits of being a moral vanguard), and by being more coordinated than their (initially more numerous) opponents, they can apply pressure/coercion to change some people’s minds (their minds respond to the pressure by becoming true believers) and silence others or force them to mouth the party line. The end game is to indoctrinate everyone’s kids with little resistance, and the old morality eventually dies off.
It seems to me like liberalism shifted the dynamics towards the softer side (withholding of association/cooperation as opposed to physical coercion/war, tolerance/intolerance instead of hard censorship), but the overall dynamic really isn’t that different, in that reason/deliberation/reflection still plays only a minor role in how moral change happens. In other words, life under liberalism is more pleasant in the short run, but it doesn’t really do much to ensure long-term moral progress, which I think explains why we’re seeing a lot of apparent regress recently.
ETA: Also, to the extent that longtermists and people like me (who think that it’s normative to have high moral uncertainty) are not willing to spread our views through these methods, it probably means our views will stay unpopular for a long time.
I like Scott Alexander’s discussion of symmetric vs asymmetric weapons. Symmetric weapons lead to an unceasing battle, which as you said has at least become less directly violent, but whose outcomes are more or less a random walk. But asymmetric weapons pull ever so slightly toward, well, a weakly extrapolated volition of the players on both sides.
Brownian motion plus a small term looks just like Brownian motion until you look a long ways back and notice the unlikeliness of the trend. The arc of the moral universe is long, etc.
(Of course, in this century we very probably don’t have the luxury of a long arc...)
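A small gloss on why the drift eventually shows (my own illustration, not part of the comment above; the symbols are just the standard drift-plus-noise decomposition):

```latex
% Trend plus Brownian noise: X_t = \mu t + \sigma W_t, with W_t standard Brownian motion.
X_t = \mu t + \sigma W_t, \qquad
\mathbb{E}[X_t] = \mu t, \qquad
\operatorname{sd}(X_t) = \sigma \sqrt{t}, \qquad
\frac{\mathbb{E}[X_t]}{\operatorname{sd}(X_t)} = \frac{\mu}{\sigma}\sqrt{t}.
```

The trend grows linearly while the noise grows only as the square root of time, so over short horizons the path is indistinguishable from pure Brownian motion, but over a long enough arc the drift dominates.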
That’s a good point, but aside from not having the luxury of a long arc, I’m also worried about asymmetric weapons coming online soon that will work in favor of bad ideas instead of good ones, namely AI assisted persuasion and value lock-in. Basically, good ideas should keep their hosts uncertain and probably unwilling to lock in their own values and beliefs or use superintelligent AI to essentially hack other people’s minds, but people under the influence of bad ideas probably won’t have such compunctions.
ETA: Also, some of the existing weapons are already asymmetric in favor of bad ideas. Namely the more moral certainty you have, the more you’re willing to use social pressure / physical coercion to spread your views. This could partly explain why moral uncertainty is so rare.
The very concept of moral uncertainty is pretty foreign to the vast majority of humanity. Tolerance and forbearance wax and wane in popularity, but actually acknowledging that you don’t know what’s best just doesn’t happen.
Association transitivity is applied at the level of individuals rather than at the level of the ideas those individuals hold. Most individuals’ beliefs and thoughts remain almost static through time, and maybe that is why the individual became the default level for association transitivity.
As you said, actually being uncertain really doesn’t happen, because developing a concrete worldview is important for survival as a person grows up from childhood. Schools certainly don’t focus much on uncertainty itself. It has to come from an individual’s own willingness to seek out alternatives and to develop the habit of an uncertainty mindset, often by reading a bit too much.
Society at large doesn’t encourage uncertainty, mainly because it is inefficient to apply at a massive scale: it would lead to too much chaos and misunderstanding, and people wouldn’t be able to communicate effectively. The luxury of being uncertain is not something most people can afford; widespread uncertainty would require a very different type of societal structure and interoperability.
As a result, we apply association transitivity to individuals, because the ideas themselves are too ephemeral.
As an example of the reasoning of moral vanguards, a few days ago I became curious how the Age of Enlightenment (BTW, did those people know how to market themselves or what?) came about. How did the Enlightenment philosophers conclude (and convince others) that values like individualism, freedom, and equality would be good, given what they knew at the time? Well, judge for yourself. From https://plato.stanford.edu/entries/enlightenment:
However, John Locke’s Second Treatise of Government (1690) is the classical source of modern liberal political theory. In his First Treatise of Government, Locke attacks Robert Filmer’s Patriarcha (1680), which epitomizes the sort of political theory the Enlightenment opposes. Filmer defends the right of kings to exercise absolute authority over their subjects on the basis of the claim that they inherit the authority God vested in Adam at creation. Though Locke’s assertion of the natural freedom and equality of human beings in the Second Treatise is starkly and explicitly opposed to Filmer’s view, it is striking that the cosmology underlying Locke’s assertions is closer to Filmer’s than to Spinoza’s. According to Locke, in order to understand the nature and source of legitimate political authority, we have to understand our relations in the state of nature. Drawing upon the natural law tradition, Locke argues that it is evident to our natural reason that we are all absolutely subject to our Lord and Creator, but that, in relation to each other, we exist naturally in a state of equality “wherein all the power and jurisdiction is reciprocal, no one having more than another” (Second Treatise, §4). We also exist naturally in a condition of freedom, insofar as we may do with ourselves and our possessions as we please, within the constraints of the fundamental law of nature. The law of nature “teaches all mankind … that, being all equal and independent, no one ought to harm another in his life, health, liberty, or possessions” (§6). That we are governed in our natural condition by such a substantive moral law, legislated by God and known to us through our natural reason, implies that the state of nature is not Hobbes’ war of all against all. However, since there is lacking any human authority over all to judge of disputes and enforce the law, it is a condition marred by “inconveniencies”, in which possession of natural freedom, equality and possessions is insecure. According to Locke, we rationally quit this natural condition by contracting together to set over ourselves a political authority, charged with promulgating and enforcing a single, clear set of laws, for the sake of guaranteeing our natural rights, liberties and possessions. The civil, political law, founded ultimately upon the consent of the governed, does not cancel the natural law, according to Locke, but merely serves to draw that law closer. “[T]he law of nature stands as an eternal rule to all men” (§135). Consequently, when established political power violates that law, the people are justified in overthrowing it. Locke’s argument for the right to revolt against a government that opposes the purposes for which legitimate government is founded is taken by some to justify the political revolution in the context of which he writes (the English revolution) and, almost a hundred years later, by others to justify the American revolution as well.
I’m pretty happy that we no longer have the divine right of kings, though. For most of history, god-monarchies were very prevalent. Somehow Locke and his friends found an attack that worked; it wasn’t a small task.
I often think about such changes as phase transitions on a network.
If we assume that these processes (nucleation of a clique of the new phase, changes in energy at edge boundaries, ...) are independent of the content of the moral change, we can expect “fluctuations” of new moral phases to emerge. The question is then which of these fluctuations grow to eventually take over the whole network; from an optimistic perspective, this is where relatively small differences between moral phases, caused by some phases being “actually better”, break the symmetry and lead to gradual moral progress.
Stated in other words: looking at the micro-dynamics, at individual edges and nodes, the main terms are social pressure, coercion, etc., but third-order terms representing something like “goodness of the moral system in the abstract” act as symmetry-breaking terms and have large macroscopic consequences.
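As a toy illustration of that symmetry-breaking claim, here is a minimal sketch (my own construction, not something from the comment; the graph model, the bias size, and all parameter values are arbitrary assumptions): a voter-model-style simulation in which each node mostly copies a random neighbor (the “social pressure” term), plus a tiny per-update bias toward the +1 phase, standing in for that phase being “actually better”.

```python
# Toy sketch (illustrative assumptions throughout): a biased voter model on a
# random graph. Nodes hold phase -1 or +1; each step a node copies a random
# neighbor ("social pressure"), except that a small bias nudges updates toward
# +1, standing in for that phase being "actually better".
import random


def random_graph(n, p, rng):
    """Erdos-Renyi-style undirected graph as an adjacency dict."""
    nbrs = {v: set() for v in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs


def run(n=100, p=0.08, bias=0.02, steps=50_000, seed=0):
    rng = random.Random(seed)
    nbrs = random_graph(n, p, rng)
    state = {v: rng.choice((-1, +1)) for v in range(n)}  # symmetric initial mix
    total = sum(state.values())
    for _ in range(steps):
        v = rng.randrange(n)
        if not nbrs[v]:
            continue
        u = rng.choice(tuple(nbrs[v]))
        # Main term: copy the neighbor. Symmetry-breaking term: with small
        # probability `bias`, adopt +1 regardless of the neighbor's phase.
        new = +1 if (state[u] == +1 or rng.random() < bias) else -1
        total += new - state[v]
        state[v] = new
        if abs(total) == n:  # consensus: every node in the same phase
            break
    return total / n  # +1.0 means the "better" phase took over


if __name__ == "__main__":
    wins = sum(run(seed=s) > 0 for s in range(20))
    print(f"+1 phase dominant in {wins}/20 runs")
```

With the bias set to zero this is the symmetric voter model and either phase wins a coin flip; a bias of a couple of percent is invisible in any single update yet usually decides which phase takes over the network, which is the sense in which a tiny “goodness” term can have large macroscopic consequences.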
Turning to longtermism: network-wise, it seems advantageous for the initial bubble of the new phase to spread to central nodes in the network, which seems broadly in line with what EA is doing. Plausibly, in this phase, reasoning plays a larger role and coercion a smaller one, which is what you see. On the other hand, if longtermism becomes sufficiently large / dominant, I would expect it to become more coercive.
I think this is a good way to think about the issues. My main concerns, put into these terms, are:
1. The network could fall into some super-stable moral phase that’s wrong or far from best. The stability could be enabled by upcoming tech like AI-enabled value lock-in, persuasion, surveillance.
2. People will get other powers, like being able to create an astronomical number of minds, while the network is still far from the phase that it will eventually settle down to, and use those powers to do things that will turn out to be atrocities when viewed from the right moral philosophy or according to people’s real values.
3. The random effects overwhelm the directional ones and the network keeps transitioning through various phases far from the best one. (I think this is a less likely outcome though, because it seems like sooner or later it will hit upon one of the super-stable phases mentioned in 1.)
Have you written more about “moral phase transitions” somewhere, or have specific thoughts about these concerns?