There’s something that strikes me as odd about the way I hear “moral progress” discussed in the EA/longtermist community in general and in What We Owe the Future in particular.
The issues that are generally discussed under this heading are things like: animal welfare, existential risk, “coordination” mechanisms, and so forth.
However, when I look out at the world, I see a lot of problems rarely discussed among EAs. Most of the world is still deeply religious; in some communities and even whole nations, this pervades the culture and can even create oppression. Much of the world still lives under authoritarian or even totalitarian governments. Much of the world lives in countries that lack the institutional capacity to support even rudimentary economic development, leaving their people without electricity, clean water, etc.
And the conversation around the world’s problems leaves even more to be desired. A lot of discussion of environmental issues is lacking in scientific or technical understanding, treating nature like a god and industry like a sin—there’s even now a literal “degrowth” movement. Many young people in the US are now attracted to socialism, an ideology that should have been left behind in the 20th century. Others support Trump, an authoritarian demagogue with no qualifications for leadership. Both sides seem eager to tear down the institutions of liberalism in a “burn it all down” mentality. And all around me I see people more concerned with tribal affiliations than with truth-seeking.
So when I think about what moral progress the world needs, I mostly think it needs a defense of Enlightenment ideas such as reason and liberalism, so that these ideas can become the foundation for addressing the real problems and threats we face.
I think this might be highly relevant even to someone solely concerned with existential risk. For instance, if we want to make sure that an extinction-level weapon doesn’t get into the hands of a radical terrorist group, it would be good if there were fewer fundamentalist ideologies in the world, and no nation-states that sponsor them. More prosaically, if we want to have a good response to pandemics, it would be good to have competent leadership instead of the opposite (my understanding is that the US COVID response could have been much better if we had just followed the pre-existing pandemic response plan). If we want to make sure civilization deals with climate change, it would be good to have a world that believed in technological solutions rather than being locked in a battle over “degrowth.” Etc.
Looking at it another way, we could think about two dimensions of moral progress, analogous to two dimensions of economic progress: pushing forward the frontier, vs. distribution of a best-practice standard. Zero-to-one progress vs. one-to-N progress. EA folks are very focused on pushing forward the moral frontier, breaking new moral ground—but I’m very worried about, well, let’s call it “moral inequality”: simple best practices like “allow freedom of speech,” “give women equality,” or even “use reason and science” are nowhere near universal.
These kinds of concerns are what drew me to “progress studies” in the first place (before that term even existed). I see progress studies first and foremost as an intellectual defense of progress as such, and ultimately of the Enlightenment ideas that underlie it.
But I never hear EA folks talk about these kinds of issues, and these ideas don’t seem to resonate with the community when I bring them up. I’m still left wondering, what is the disconnect here?
I agree, and I think that a focus on the welfare of animals while there is so much outstanding human suffering to be tackled is a weird mistake the EA community seems to be making. Even more important, aligning AI (and the potential relevance of moral philosophy to that aim) seems to vastly overwhelm anything whatsoever happening with the environment. If you want to help the environment or animals, the only plausible way to do so is to help align AI with your values (including your value of the environment and animals). We’re at a super weird crux point where everything channels through that.
I don’t think it’s a mistake to focus on animal suffering over human suffering (if we’re only comparing these two), since it seems likely we can reduce animal suffering more cost-effectively, and possibly much more cost-effectively, depending on your values (a toy calculation follows the links below). See:
https://forum.effectivealtruism.org/posts/ahr8k42ZMTvTmTdwm/how-good-is-the-humane-league-compared-to-the-against
https://forum.effectivealtruism.org/posts/fogJKYXvqzkr9KCud/a-complete-quantitative-model-for-cause-selection#global-poverty-vs-animal-advocacy
https://forum.effectivealtruism.org/posts/nDgCKwjBKwFvcBsts/corporate-campaigns-for-chicken-welfare-are-10-000-times-as
https://forum.effectivealtruism.org/posts/rvvwCcixmEep4RSjg/prioritizing-x-risks-may-require-caring-about-future-people
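To make “depending on your values” concrete, here is a minimal toy calculation. Every number in it (the cost per human DALY, the chicken-years improved per dollar, and the moral weight of a chicken-year relative to a human DALY) is a made-up placeholder chosen for illustration, not a figure from the linked posts.

```python
# Toy cost-effectiveness comparison: human-focused vs. animal-focused giving.
# All inputs are made-up placeholders for illustration; substitute your own
# estimates and moral weights.

human_cost_per_daly = 100.0        # assumed $ to avert one human DALY
chicken_years_per_dollar = 10.0    # assumed chicken-years improved per $
moral_weight = 0.01                # assumed value of one improved chicken-year,
                                   # measured in human-DALY equivalents

human_good_per_dollar = 1.0 / human_cost_per_daly        # DALY-equivalents per $
animal_good_per_dollar = chicken_years_per_dollar * moral_weight

print(f"human-focused:  {human_good_per_dollar:.3f} DALY-equivalents per $")
print(f"animal-focused: {animal_good_per_dollar:.3f} DALY-equivalents per $")
print(f"animal/human ratio: {animal_good_per_dollar / human_good_per_dollar:.0f}x")
```

With these placeholders the animal intervention comes out 10x better per dollar, but the ratio scales linearly with the moral weight you assign, which is exactly why the comparison depends on your values.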
“If you want to help the environment or animals, the only plausible way to do so is to help align AI with your values (including your value of the environment and animals). We’re at a super weird crux point where everything channels through that.”
We can still prevent suffering up until AGI arrives; AGI might not come for decades; and even after it comes, if we don’t go extinct (which would very plausibly come with the end of animal suffering!), there can still be popular resistance to helping or not harming animals. You might say influencing AI values is the most cost-effective way to help animals, and this is plausible, but it is not obvious. Some people are looking at moral circle expansion as a way to improve the far future, like Sentience Institute, but mostly for artificial sentience.
Likely relevant: Raising the Sanity Waterline.
I tend to think of much of this as coming down to coalitions. Different coalitions have different norms/beliefs/etc., and what you want to do is get more people into coalitions that support Enlightenment norms and ideas.
There seem to be three main ways that this can be achieved:
1. Create an Enlightenment-based coalition that is so attractive that people will naturally tend to want to join it and adopt its norms.
2. Enter existing coalitions and gain power in them, then use that power to shift them toward more Enlightenment norms and beliefs.
3. Use force to destroy the alternative non-Enlightenment coalitions, thereby making people need to adopt Enlightenment norms and beliefs.
Option 3 seems kind of contrary to Enlightenment ideals: e.g., one option for “force” could be censoring them, but censorship violates the Enlightenment norm of freedom of speech.
The other two options basically rely on impressing the people within the coalitions that currently disagree with Enlightenment norms. But there’s a reason people follow those coalitions rather than the more pro-Enlightenment ones: they are not sufficiently impressed by the achievements and goals of Enlightenment-aligned people compared to what their own coalitions have to offer. So the things that recruited people to the Enlightenment-aligned groups are unlikely to work on the people in these competing groups.
I would support EA spending a chunk of its resources strategizing on whether and how to participate in politics, do activism, or lobby politicians. If we understood how to involve ourselves in politics in a way that didn’t pose a serious risk of destroying or corrupting EA, we might be able to find some important, tractable, and neglected levers to pull in at least some of these areas. We also need to figure out a way to think about these more sociological and qualitative issues; it’s not EA’s strong suit.
I’m very glad EA has tried to steer largely clear of politics; our track record in our few attempts thus far is not encouraging. By contrast, EA seems to have a good thing going with its present strategy, and I don’t see an obvious slowdown looming for the movement.
On the flip side, learning how to be effective in politics might be crucial for advancing standard EA goals around AGI and biorisk. SBF already spent a pile of cash on a fruitless and possibly misguided attempt to gain political clout for his causes by getting an EA elected to Congress. If money’s already being spent and EA’s reputation is already at risk from high-profile misguided political activity, maybe that’s a sign we ought to focus on learning how to do it well.
What makes you say this? The word “socialism” is so overloaded, I can’t tell which specific interpretation of it you dislike and why. I support some ideologies that describe themselves as socialist, but repudiate others.
Here’s an example:
https://www.vox.com/first-person/2018/8/1/17637028/bernie-sanders-alexandria-ocasio-cortez-cynthia-nixon-democratic-socialism-jacobin-dsa
Why is that automatically bad? The word “capitalism” is overloaded as well. It’s a two-dimensional problem, at least.
The problem here seems to be that these “democratic” socialists still believe in the legitimacy of the state, which is a mistake. What would actually happen, if they got what they want, is the center of elite power moving from the capitalist-controlled market to the equally capitalist-controlled government; in practice nothing would change, as corporate lobbyists and networks of cronyism already determine who gets elected, and all the regulatory agencies are quickly captured by the industries they regulate. I think these people are well-intentioned, but nowhere near radical enough.
Also, as is typical for liberals, they confuse the market with capitalism, when in reality a capitalist market is only one possible kind of market, and a quite degenerate and unfree one at that. It is characterized by rampant wage slavery, not because markets intrinsically result in inequality (they don’t), but due to the existence of a state with a monopoly on violence, which is used by the capitalist class to back up their claims to “own” the means of production (i.e., to steal the products of workers’ labor out of their hands and give them a pittance in return). An anticapitalist market, freed from excessive rents, interest, and profits, along the lines of what mutualists envision creating, seems to me like it would be much more conducive to human flourishing.
I’m not an expert on it, but I’d encourage you to read about anarchist economic theories, particularly mutualism. Capitalism versus thinly disguised state communism is a false dichotomy. The way I see it, the state, even if supposedly democratic in nature (as if the ability of a majority to coerce a minority were a sign of moral advancement rather than yet another example of the brutality of the natural state! and we all know most Western countries are only apparently democratic anyway—in reality controlled by corrupt elites who manipulate the public’s perception of reality to maintain their position), is not only unnecessary but actively harmful to human freedom, and the ideals of the Enlightenment can only be attained when it, and capitalism, are both abolished in favor of local direct democracy (involving communes and worker cooperatives) and freed markets.
Also, check out the history of anarcho-communism. Everywhere it’s been tried, it has succeeded, or at least not decayed into totalitarianism—though it has unfortunately only ever been tried in dangerous times and places, like the Spanish Civil War and (if democratic confederalism is considered an offshoot) the autonomous cities of northeast Syria today. State communism à la Soviet Russia and China is the one that should be left behind in the 20th century, along with states generally. Anarchism, in all its forms, deserves a closer look.
I am guilty of being a zero-to-one, rather than one-to-many, type of person. It seems far easier and more interesting to me to create new forms of progress of any sort than to convince people to adopt better ideas.
I guess the project of convincing people seems hard? Like, if I come up with something awesome and new, it seems easier to get it into people’s hands than to take an existing thing people have already rejected and tell them, “hey, this is actually cool, let’s look again.”
All that said, I do find this idea-space intriguing, partly thanks to this post—it makes me want to think of ways of doing more one-to-many type work. I’ve recently been drawn to living in DC, and I think the DC effective altruism folks are much more on the one-to-many side of things.
I don’t blame anyone for being more personally interested in advancing the moral frontier than in distributing moral best practices. And we need both types of work. I’m just curious why the latter doesn’t figure more prominently in EA cause prioritization.
It may be the same kind of bias that disproportionately incentivizes publishing shiny new research papers, finding new hypotheses, etc., over trying to replicate what has already been published.
I should point out that the logic of the degrowth movement follows from a relatively straightforward analysis of available resources vs. first-world consumption levels. Our world can only sustain 7 billion human beings because the vast majority of them live not at first-world levels of consumption but at third-world levels, which many would argue is unfair and an unsustainable pyramid scheme. If you work out the numbers, taking into account things like meat consumption relative to arable land, energy usage, etc., then if everyone had the quality of life of a typical American citizen, the Earth would be able to sustain only about 1-3 billion such people. Degrowth thus follows logically if you believe that everyone around the world should eventually be able to live comfortable, first-world lives.
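To make the arithmetic explicit, here is a minimal sketch. The two inputs are rough ballpark figures of the kind published by the Global Footprint Network, supplied here for illustration rather than taken from the comment above.

```python
# Back-of-the-envelope version of the degrowth arithmetic.
# Both inputs are rough ballpark estimates, used only to illustrate the logic.

global_biocapacity_gha = 12e9       # Earth's biocapacity in global hectares (assumed)
us_footprint_gha_per_capita = 8.0   # average US ecological footprint, gha/person (assumed)

supportable_population = global_biocapacity_gha / us_footprint_gha_per_capita
print(f"Supportable at US consumption levels: {supportable_population / 1e9:.1f} billion people")
# -> about 1.5 billion, consistent with the 1-3 billion range cited above
```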
I’ll also point out that socialism is, like liberalism, a child of the Enlightenment and of the general belief that reason and science could be used to solve political and economic problems. Say what you will about the failed socialist experiments of the 20th century, but the idea that government should be able to engineer society to function better than the ad-hoc arrangement that is capitalism is very much an Enlightenment rationalist, materialist, and positivist position, traceable to Jean-Jacques Rousseau, Charles Fourier, and other philosophes before Karl Marx came along and made it particularly popular. Marxism in particular at least claims to be “scientific socialism,” and historically emphasized reason and science to the extent that most Marxist states were officially atheist (something you might like, given your concerns about religion).
In practice, many modern social policies, such as the welfare state, Medicare, and public pensions, are heavily influenced by socialist thinking and were put in place in part as a response by liberal democracies to the threat of the state socialist model during the Cold War. No country in the world runs on laissez-faire capitalism; all use mixed-market economies with varying degrees of public and private ownership. The U.S. still has a substantial public sector, just as China, an ostensibly Marxist-Leninist society, has a substantial private sector (albeit with public ownership of the “commanding heights” of the economy). It seems that all societies in the world eventually compromised in similar ways to achieve reasonably functional economies balanced against the need to avoid potential class conflict. This convergence is probably not accidental.
If you’re truly more concerned with truth-seeking than tribal affiliations, you should be aware of your own tribe, which, as far as I can tell, is Western, liberal, and democratic. Even if you honestly believe in the moral truth of the Western liberal democratic intellectual tradition, you should still be aware that it is, in some sense, a tribe. A very powerful one that is arguably predominant in the world right now, but a tribe nonetheless, with its inherent biases (or priors at least) and propaganda.
Just some thoughts.
I don’t think we have time before AGI comes to deeply change global culture.
I’ve noticed that LW is generally more cynical about civilization (probably inspired by EY) compared to EA. You can see it in the framings. The “reducing existential risk” framing focuses on risks, not on making something happen that gives us more control than we currently have. The implicit theme is that “humans will be in control by default.” Whereas the way EY frames things, it’s more like “there’s a big challenge coming up but civilization isn’t reacting properly; we have a lot of work to do forming pockets of sanity and saving everyone with the help of magic-like technology which we’re forced to develop and master fast enough (because others are going to fuck it up by default).”
To address the title only, for each person moral progress consists of everyone else adopting their moral views. For they believe that their moral views are right (otherwise they would not be their views); therefore everyone else is wrong to the extent that they differ.
People who disagree about physical things can resolve their differences by reason applied to observation and experiment. Nothing of the sort seems to be available for morality. There is no evidence, no Bayesian updating. Without that, what sound epistemology can there be? Instead, there is only warfare, metaphorical or actual, cold or hot. The fundamental imperative of morality is to take over everyone else’s mind, and all that differs is what methods one’s morality permits.
I am sometimes tempted to ask ████s why, if they believe ███ █ ████, and not merely believe that they believe it, they aren’t out █████ing ██ ███s, but I don’t, because I think there would be too much risk of them actually doing that.
“People who disagree about physical things can resolve their differences by reason applied to observation and experiment. Nothing of the sort seems to be available for morality.”
—
W. James applies the empirical method to various belief systems in a presentation to a seemingly skeptical audience. The test for all is the resulting behavior. Not even the saints remain unscathed!
“Both thought and feeling are determinants of conduct, and the same conduct may be determined either by feeling or by thought. When we survey the whole field of religion, we find a great variety in the thoughts that have prevailed there; but the feelings on the one hand and the conduct on the other are almost always the same, for Stoic, Christian, and Buddhist saints are practically indistinguishable in their lives. The theories which Religion generates, being thus variable, are secondary; and if you wish to grasp her essence, you must look to the feelings and the conduct as being the more constant elements. It is between these two elements that the short circuit exists on which she carries on her principal business,”
—William James, 1901 (lectures to book)
[the last sentence of your post was unreadable due to blocks of text having been blacked out]
To William James one can add C.S. Lewis’ essay on “The Tao”. And yet, pace these philosophers, people have fought wars over how people should live, and still do. Christians worked that out of their system with the Thirty Years War, but the Shia/Sunni and Moslem/Hindu conflicts continue. And even in the rationalist bubble, someone is always popping up to say that there is no such thing as morality, while others compare factory farming to the Holocaust.
I observe, therefore, that in fact these things are not resolved. Who can resolve them?
That was intentional. The blocks also do not correspond in length or number to the actual words I had in mind, just to make sure they are not discoverable.
Thanks for adding the James link.
Lewis has long been a fav author. Abolition of Man (3rd sec.) left a lasting impression (scars?).
Re: “people have fought wars over how people should live, and still do.”
I offer in response: “they have turned to God without turning from themselves; would be alive to God before they are dead to their own nature. Now religion in the hands of self, or corrupt nature, serves only to discover vices of a worse kind than in nature left to itself. Hence are all the disorderly passions of religious men, which burn in a worse flame than passions only employed about worldly matters; pride, self-exaltation, hatred and persecution, under a cloak of religious zeal, will sanctify actions which nature, left to itself, would be ashamed to own.”—
William Law, via A. Huxley, The Perennial Philosophy, 1945
Re: “even in the rationalist bubble, someone is always [saying] that there is no such thing as morality”
We all like to put what we read and observe in clear containers with familiar labels, but IMO subjective human experience, with all its ambiguities, should be considered(1). Considering all human activity, I find that when we don’t get along, we break things, even to the point of our own detriment. Cooperating provides better material results and greater, longer-lasting satisfaction. Beyond the initial crisis or common interest, cooperation requires “moral struggle,” which is summed up as the Golden Rule+ (treat others like you want to be treated and try not to be a jerk about it) (I added the last part to remind myself : )
I’ll close with a quote from CSL’s 1952 pub:
“Strictly speaking, there are no such things as good and bad impulses. Think once again of a piano. It has not got two kinds of notes on it, the “right” notes and the “wrong” ones. Every single note is right at one time and wrong at another. [my bold] The Moral Law is not any one instinct or any set of instincts: it is something which makes a kind of tune (the tune we call goodness or right conduct) by directing the instincts.”
Thanks for the exchange, Richard. Please do reply if you have something to add!
Mark
(1) Re-posted: Scott Alexander’s What is Mysticism? A working definition for skeptics
Also, Superb Owl’s Religion as an Ego-modulator, William James, Aldous Huxley, and the functions of religion. Lastly, I. McGilchrist 08, 22.
During further research into the topic of morality, I discovered the following.
Per the excerpt below, which meshes with my current understanding: knowledge of “morality,” which is based on unchanging Natural Law (the “Ground,” essentially the Golden Rule), is restricted by who we are. Given that who we are fluctuates, this adds even more variation into defining it. I’m not suggesting basic morality is relative to a person or culture, but how we perceive it is. Applying basic morality to a current culture is another matter, but first things first.
“In other words, the Ground can be denoted as being there, but not defined as having qualities. This means that discursive knowledge about the Ground is not merely, like all inferential knowledge, a thing at one remove, or even at several removes, from the reality of immediate acquaintance; it is and, because of the very nature of our language and our standard patterns of thought, it must be, paradoxical knowledge. Direct knowledge of the Ground cannot be had except by union, and union can be achieved only by the annihilation of the self-regarding ego, which is the barrier separating the “thou” from the “That.””
—Aldous Huxley, The Perennial Philosophy, 1945
[Start edit/add]
Here’s a perspective on current issues: social-need-driven answers to moral questions. Still, first things first, I say.
“Moral questions may not have objective answers but they do have rational ones, answers rooted in a rationality that emerges out of social need. To bring reason to bear upon social relations, to define a rational answer to a moral question, requires social engagement and collective action. It is the breakdown over the past century of such engagement and such action that has proved so devastating for moral thinking.”
—Kenan Malik, The Quest for a Moral Compass, 2014
In the US, “socialism” is a derogatory word used for the belief that the government shouldn’t let you die, and that it should serve you (rather than you it). These policies work fine in Europe, and there is no connection to socialism or communism.
It’s not possible to infer, from a single murderous ruling party failing to govern acceptably (the central example of “communism”), that what right-wing (or other) Americans call “socialism” is negative or undesirable in any way.

This comment seems right to me, but maybe it’s misunderstanding the OP? I’d imagine that the OP was talking about “actual socialism (communism)” and not “derogatory word US libertarians use for left-ish policy”? There are definitely some people sympathizing with “actual socialism (communism).”
I mean, probably a lot of LWers are libertarians, but I’d hope they don’t consider politics an easy enough issue that mainstream disagreeing views without an obviously atrocious track record are basically “backwards in terms of moral progress.”
I don’t think those are numerous enough for anyone to notice.
“Moral progress” is first and foremost what it takes for people to get along. An ongoing process at the individual and group levels.
“a lot of problems rarely discussed among EAs.”
While I’m floored by the civil, intelligent, and sincere discussion I’ve recently found here at LW, the more fundamental problem underlying what is often discussed is, IMO, that no person seems immune to the avoidance of self-regulation. The imperative to self-regulate is noted in B. Russell’s Nobel Prize acceptance speech, “The Four Desires Driving All Human Behavior” (1950), in which he describes universal human desires that are insatiable.
Communities of those striving to self-regulate know they must be mindful of the constant maintenance required to balance individual and societal needs as well as wants.
cite: From Walden, 1854: “Our whole life is startlingly moral. There is never an instant’s truce between virtue and vice. Goodness is the only investment that never fails.” Also, Aristotle’s Doctrine of the Mean.