Effective Altruism is a well-intentioned but flawed philosophy. This is a critique of typical EA approaches, but it might not apply to all EAs, or to alternative EA approaches.
Edit: In a follow up comment, I clarify that this critique is primarily directed at GiveWell and Peter Singer’s styles of EA, which are the dominant EA approaches, but are not universal.
There is no good philosophical reason to hold EA’s axiomatic style of utilitarianism. EA seems to value lives equally, but this is implausible from psychology (which values relatives and friends more), and also implausible from non-naive consequentialism, which values people based on their contributions, not just their needs.
Even if you agree with EA’s utilitarianism, it is unclear that EA is actually effective at optimizing for it over a longer time horizon. EA focuses on maximizing lives saved in the present, but it has never been shown that this approach is optimal for human welfare over the long run. The existential-risk strand of EA gets this better, but its focus is too far off in the future.
If EA is true, then moral philosophy is a solved problem. I don’t think moral philosophy works that way. Values are much harder than EA gives credit for. Betting on a particular moral philosophy with a percentage of your income shows an immense amount of confidence, and extraordinary claims require extraordinary evidence.
EA has an opportunity cost, and its confidence is crowding out better ideas. What would those better altruistic interventions be? I don’t know, but I feel like we can do better.
EAs have a weak understanding of geopolitics and demographics. The current state of the world is that Western Civilization, the goose that laid the golden egg, is declining. If indeed Western Civilization is in trouble, and we are facing near or medium-term catastrophic risks like social collapse, turning into Brazil, or war with Russia or China, then the highest-value opportunities for altruism will be at home. Unless you think we have a hard-takeoff AI scenario or technological miracles in the near-term, we should be very worried about geopolitics, demographics, and civilization in the medium-term and long-term.
If Western Civilization collapses, or is over-taken by China, then that will not be a good future for human welfare. Averting this possibility is way more high-impact than anything else that EAs are currently doing. If the West is secure and abundant, then maybe EAs have the right idea by redistributing wealth out of the West. But if the West is precarious and fragile, then redistribution makes less sense, and addressing the risks in the West seems more important.
EAs do not understand demographics, or are not taking them seriously if they do. The West is currently faltering in fertility and undergoing population replacement from people from areas with higher crime and corruption. Meanwhile, altruism itself varies between populations based on clannishness and inbreeding. We are heading towards a future that is demographically more clannish and less altruistic.
Some EAs are open borders advocates, but open borders is a ridiculously dangerous experiment for the West. They have not satisfactorily accounted for the crime and corruption that immigrants may bring. Additionally, under democracy, immigrants can vote and change the culture. Open border advocates hope that institutions will survive, but they have provided no good arguments that Western institutions will survive rapid demographic change. Institutions might seem fine and then rapidly collapse in a non-linear way. If Western Civilization collapses into ethnic turmoil or Soviet sclerosis, then humans everywhere will suffer.
Some EAs have a skeptical attitude towards parenthood, because it takes away money from charity, and believe that EAs are easier to convert than create. In some cases, EAs who want to become parents justify parenthood as an unprincipled exception. This whole conversation is ridiculous and exemplifies EAs’ flawed moral philosophy and understanding of humans. Altruistic parents are likely to have altruistic children due to the heritability of behavioral traits. If altruistic people fail to breed, then they will take their altruistic genes to the grave with them, like the Shakers. If altruism itself is a casualty of changing demographics, then human welfare will suffer in the future. (If you doubt this can happen, then check out the earlier two links, and good luck getting Eastern Europeans or Middle-Easterners interested in EA.)
I don’t think EAs do a very good job of distinguishing their moral intuitions from good philosophical arguments; see the interest of many EAs in open borders and animal rights. I do not see much understanding in EA of what altruism is and how it can become pathological. Pathological altruism is where people become practically addicted to the feeling of doing good, which sometimes leads them to act with negative consequences. A quote from the book in that review, which shows some of the difficulty of disentangling moral psychology from moral philosophy:
Despite the fact that a moral conviction feels like a deliberate rational conclusion to a particular line of reasoning, it is neither a conscious choice nor a thought process. Certainty and similar states of ‘knowing that we know’ arise out of primary brain mechanisms that, like love or anger, function independently of rationality or reason. . . .
What feels like a conscious life-affirming moral choice—my life will have meaning if I help others—will be greatly influenced by the strength of an unconscious and involuntary mental sensation that tells me that this decision is “correct.” It will be this same feeling that will tell you the “rightness” of giving food to starving children in Somalia, doing every medical test imaginable on a clearly terminal patient, or bombing an Israeli school bus. It helps to see this feeling of knowing as analogous to other bodily sensations over which we have no direct control.
It seems that some people have strong intuitions towards altruism or animal rights, but it’s another thing entirely to say that those arguments are philosophically strong. It seems that people who are biologically predisposed towards altruism will be motivated to find philosophical arguments that justify what they already want to do. I don’t think EAs have corrected for this bias. If EAs’ arguments are flawed, then their adoption of them must be explained by their moral intuitions or signaling desires. Since EA provides great opportunities to signal altruism, intelligence, and discernment, it seems that there would be a gigantic temptation for some personalities to get into EA and exaggerate the quality of its arguments, or adopt its axioms even though other axioms are possible. Even though EAs employ reason and philosophy unlike typical pathological altruists, moral philosophy is subjective, and choice of particular moral theories seems highly related to personality.
The other psychological bias of EAs is due to them getting nerd-sniped by narrowly defining problems, or picking problems that are easier to solve or charities that are possible to evaluate. They seem to operate from the notion that giving away some of their money to charity is taken for granted, so they just need to find the best charity out of those that are possible to evaluate. In an inconvenient world for an altruist, the high-value opportunities are unknown or unknowable, throwing your money at what seems best might result in a negligible or negative effect, and keeping your money in your piggy bank until more obvious opportunities emerge might make the most sense.
EA isn’t all bad. It’s probably better than typical ineffective charities, so if you absolutely must give to a charity, then effective charities are probably better. EAs have the right idea by trying to evaluate charities. Many EA arguments are strong within the bounds of utilitarianism, or the confines of a particular problem. But EAs have a hard road towards justification, because their philosophy advocates spending money on strong moral claims, and being wrong about important things about the world will totally throw off their results.
My criticisms here don’t apply to all EAs or all possible EA approaches, just the median EA arguments and interventions I’ve seen. It is conceivable that in the future EA will become more persuasive to a larger group of people once it has greater knowledge about the world and incorporates that knowledge into its philosophy. An alternative approach to EA would focus on preserving Western Civilization and avoiding medium-term political/demographic catastrophes. But nobody is sufficiently knowledgeable at this point to know how we could spend money towards this goal.
As someone said in another comment, there are the core tenets of EA, and there is your median EA. Since you only seem to have quibbles with the latter, I’ll address some of those, but I don’t feel like accepting or rejecting them is particularly important for being an EA in the context of the current form of the movement. We love discussing and challenging our views. Then again, I think I happen to agree with many median EA views.
which values people based on their contributions, not just their needs
VoiceOfRa put very concisely what I think is a median EA view here, but the comment is so deeply nested that I’m afraid it might get buried: “Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decision based on the combination of both.”
I don’t think EAs do a very good job of distinguishing their moral intuitions from good philosophical arguments
I think this has been mentioned in the comments but not very directly. The median EA view may be not to bother with philosophy at all because the branches that still call themselves philosophy haven’t managed to come to a consensus on central issues over centuries so that there is little hope for the individual EA to achieve that.
However when I talk to EAs who do have a background in philosophy, I find that a lot of them are metaethical antirealists. Lukas Gloor, who also posted in this thread, has recently convinced me that antirealism, though admittedly unintuitive to me, is the more parsimonious view and thus the view under which I operate now. Under antirealism moral intuitions, or some core ones anyway, are all we have, so that there can be no philosophical arguments (and thus no good or bad ones) for them.
Even if this is not a median EA view, I would argue that most EAs act in accordance with it just out of concern for the cost-effectiveness of their movement-building work. It is not cost-effective to try to convince everyone of the most unintuitive inferences from one’s own moral system. However, among the things that are important to the individual EA, there are likely many that are very uncontroversial in most of society, and focusing on those views in one’s “evangelical” EA work is much more cost-effective.
Betting on a particular moral philosophy with a percentage of your income shows an immense amount of confidence, and extraordinary claims require extraordinary evidence.
From my moral vantage point, the alternative (I’ll consider a different counterfactual in a moment) is that I keep the money and spend it on myself, where its marginal positive impact on my happiness is easily two or three orders of magnitude lower, and where my uncertainty over what will make me happy is only slightly lower than my uncertainty about some top charities. That alternative would be the much more extraordinary claim.
You could break that up and note that in the end I’m not deciding merely to “donate effectively,” but deciding on a very specific intervention and charity to donate to, for example Animal Equality, which makes my decision much more shaky again. But I’d also have to make similarly specific decisions, probably only slightly less shaky, when trying to spend money on my own happiness.
However, the alternative might also be:
keeping your money in your piggy bank until more obvious opportunities emerge
That’s something the median EA has probably considered a good deal. Even at GiveWell there was a time in 2013 when some of the staff pondered whether it would be better to hold off with their personal donations and donate a year later when they’ve discovered better giving opportunities.
However, several of your arguments seem to stem from uncertainty in the sense of “there is substantial uncertainty, so we should hold off doing X until the uncertainty is reduced.” Trading off these elements in an expected-value framework and choosing the right counterfactuals is probably again a rather personal decision when it comes to investing one’s donation budget, but over time I’ve become less risk-averse and more ready to act under some uncertainty, which has hopefully brought me closer to maximizing the expected utility of my actions. Plus, I don’t expect any significant decreases in uncertainty about the best giving opportunities in the future that I could wait for. There will hopefully be more opportunities with similar or only slightly greater levels of uncertainty, though.
Part of the reason I wrote my critique is that I know that at least some EAs will learn something from it and update their thinking.
VoiceOfRa put very concisely what I think is a median EA view here, but the comment is so deeply nested that I’m afraid it might get buried: “Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decision based on the combination of both.”
I’ll take your word that many EAs also think this way, but I don’t really see it affecting the main charitable recommendations. Followed to its logical conclusion, this outlook would result in a lot more concern about the West.
Even if this is not a median EA view, I would argue that most EAs act in accordance with it just out of concern for the cost-effectiveness of their movement-building work. It is not cost-effective to try to convince everyone of the most unintuitive inferences from one’s own moral system.
Well, there is a question about what EA is. Is EA about being effectively altruistic within your existing value system? Or is it also about improving your value system to more effectively embody your terminal values? Is it about questioning even your terminal values to make sure they are effective and altruistic?
Regardless of whether you are an antirealist, not all value systems are created equal. Many people’s value systems are hopelessly contradictory, or corrupted by politics. For example, some people claim to support gay people, but they also support unselective immigration from countries with anti-gay attitudes, which will inevitably cause negative externalities for gay people. That’s a contradiction.
I just don’t think a lot of EAs have thought their value systems through very thoroughly, and their knowledge of history, politics, and object-level social science is low. I think there are a lot of object-level facts about humanity, and events in history or going on right now which EAs don’t know about, and which would cause them to update their approach if they knew about it and thought seriously about it.
Look at the argument that EAs make towards ineffective altruists: they know so little about charity and the world that they are hopelessly unable to achieve significant results in their charity. When EAs talk to non-EAs, they advocate that (a) people reflect on their value system and priorities, and (b) they learn about the likely consequences of charities at an object-level. I’m doing the same thing: encouraging EAs to reflect on their value systems, and attain a broader geopolitical and historical context to evaluate their interventions.
However, among the things that are important to the individual EA, there are likely many that are very uncontroversial in most of society and focusing on those views in one’s “evangelical” EA work is much more cost-effective.
What is or isn’t controversial in society is more a function of politics than of ethics. Progressive politics is memetically dominant, potentially religiously descended, and falsely presents itself as universal. Imagine what an EA would do in Nazi Germany under the influence of propaganda. How about Soviet Effective Altruists: would they actually do good, or would they say “collectivize faster, comrade”? How do we know we aren’t also deluded by present-day politics?
It seems like there should be some basic moral requirement that EAs give their value system a sanity check instead of just accepting whatever the respectable politics of the time tell them. If indeed politics has a very pervasive influence on people’s knowledge and ethics, then giving your value system a sanity check would require separating out the political component of your worldview. This would require deep knowledge of politics, history, and social science, and I just don’t see most EAs or rationalists operating at this level (I’m certainly not: the more I learn, the more I realize I don’t know).
The fact that the major EA interventions are so palatable to progressivism suggests that EA is operating with very bounded rationality. If indeed EA is bounded by progressivism, and progressivism is a flawed value system, then there are lots of EA missed opportunities lying around waiting for someone to pick them up.
I didn’t respond to your critiques that went in a more political direction, because there was already discussion of those aspects to which I wouldn’t have been able to add anything. There is concern in the movement in general and in individual EA organizations that because EAs are so predominantly computer scientists and philosophers, there is a great risk of incurring known and unknown unknowns. In the first category, more economists for example would be helpful; in the second category, it will be important to bring people from a wide variety of demographics into the movement without compromising its core values. As a computer scientist, I’m pretty median again.
then there are lots of EA missed opportunities lying around waiting for someone to pick them up
Indeed. I’m not sure if the median EA is concerned about this problem yet, but I wouldn’t be surprised if they are. Many EA organizations are certainly very alert to the problem.
Followed to its logical conclusion, this outlook would result in a lot more concern about the West.
This concern manifests in movement-building (GWWC et al.) and capacity-building (80k Hours, CEA, et al.). There is also a concern, which I share but which may not yet be a median EA concern, that we should focus more on movement-wide capacity-building, networking, and some sort of quality-over-quantity approach to allow the movement to be better and more widely informed. (And by “quantity” I don’t mean to denigrate anyone; I just mean more people like myself, who already feel welcomed in the movement because everyone speaks their dialect, and whose peers are easily convinced too.)
Throughout the time that I’ve been part of the movement, the general sentiment, either in the movement as a whole or within my bubble of it, has shifted in some ways. One trend that I’ve perceived is that in the earlier days there was more concern over trying vs. really trying, while now concern over putting one’s activism on a long-term sustainable basis has become more important. Again, this may be just my filter bubble. This is encouraging, as it shows that everyone is very well capable of updating, but it also indicates that as of one or two years ago, we still had a bunch to learn even concerning rather core issues. In a few more years, I’ll probably be more confident that some core questions are no longer so much in flux that new EAs could overlook or disregard them and thereby dilute what EA currently stands for, or shift it in a direction I couldn’t identify with anymore.
Again, I’m not ignoring your points on political topics, I just don’t feel sufficiently well-informed to comment. I’ve been meaning to read David Roodman’s literature review on open borders–related concerns, since I greatly enjoyed some of his other work, but I haven’t yet. David Roodman now works for the Open Philanthropy Project.
Well, there is a question about what EA is. Is EA about being effectively altruistic within your existing value system? Or is it also about improving your value system to more effectively embody your terminal values? Is it about questioning even your terminal values to make sure they are effective and altruistic?
I’ve always perceived EA as whatever stands at the end of any such process, or maybe not the end but some critical threshold: the point when a person realizes that they agree with the core tenets, that they value others’ well-being, and that greater well-being, or the well-being of more beings, weighs heavier than lesser well-being or the well-being of fewer. If they reach such a threshold, I see all three processes as relevant.
Regardless of whether you are an antirealist, not all value systems are created equal.
Of course.
Their knowledge of history, politics, and object-level social science is low. … I’m doing the same thing: encouraging EAs to reflect on their value systems, and attain a broader geopolitical and historical context to evaluate their interventions.
Yes, thanks! That’s why I was most interested in your comment in this thread, and because all other comments that piqued my interest in similar ways already had comprehensive replies below them when I found the thread.
This needs to be turned into a concrete strategy, and I’m sure CEA is already on that. Identifying exactly what sorts of expertise are in short supply in the movement and networking among the people who possess just this expertise. I’ve made some minimal-effort attempts to pitch EA to economists, but inviting such people to speak at events like EA Global is surely a much more effective way of drawing them and their insights into the movement. That’s not limited to economists of course.
Do you have ideas for people or professions the movement would benefit from and strategies for drawing them in and making them feel welcome?
I just don’t think a lot of EAs have thought their value systems through very thoroughly
Given how many philosophers there are in the movement, this would surprise me. Is it possible that it’s more the result of the ubiquitous disagreement between philosophers?
How do we know we aren’t also deluded by present-day politics?
I’ve wondered about that in the context of moral progress. Sometimes the idea of moral progress is attacked on the grounds that proponents base their claims for moral progress on how history has developed into the direction of our current status quo, which is rather pointless since by that logic any historical trend toward the status quo would then become “moral progress.” However, by my moral standards the status quo is far from perfect.
Analogously I see that the political views EAs are led to hold are so heterogeneous that some have even thought about coining new terms for this political stance (such as “newtilitarianism”), luckily only in jest. (I’m not objecting to the pun but I’m wary of labels like that.) That these political views are at least somewhat uncommon in their combination suggests to me that we’re not falling into that trap, or at least making an uncommonly good effort of avoiding it. Since the trap is pretty much the default starting point for many of us, it’s likely we still have many legs trapped in it despite this “uncommonly good effort.” The metaphor is already getting awkward, so I’ll just add that some sort of contrarian hypercorrection would of course constitute just another trap. (As it happens, there’s another discussion of the importance of diversity in the context of Open Phil in that Vox article.)
No need for you to address any particular political point I’m making. For now, it is sufficient for me to suggest that reigning progressive ideas about politics are flawed and holding EAs back, without you committing to any particular alternative view.
I’m glad to hear that EAs are focusing more on movement-building and collaboration. I think there is a lot of value in eigenaltruism: being altruistic only towards other eigenaltruistic people who “pay it forward” (see Scott Aaronson’s eigenmorality). Civilizations have been built with reciprocal altruism. The problem with most EA thinking is that it is one-way, so the altruism is consumed immediately. This post argues that morality evolved as a system of mutual obligation, and that EAs misunderstand this.
Although there is some political heterogeneity in EA, it is overwhelmed by progressives, and the main public recommendations are all progressive causes. Moral progress is a tricky concept: for example, the French Revolution is often considered moral progress, but the pictures paint another story.
On open borders, economic analyses like Roodman’s are just too narrow: they do not take into account all of the externalities, such as crime and changes to cultural institutions. OpenBorders.info addresses many of the objections, and it does a good job of summarizing some of the anti-open-borders arguments, but it often fails to refute them, and this lack of refutation doesn’t translate into an update of their general stance on immigration.
If humans are interchangeable homo economicus, then open borders would be an economic and perhaps moral imperative. If indeed human groups are significantly different, such as in crime rates, then that throws a substantial wrench into open borders. If the safety of open borders is in question, then it is a risky experiment.
Some of the early indicators are scary, like the Rotherham scandal. There are reports of similar coverups in other areas, and economic analyses do not capture the harms to these thousands of children. High-crime areas where the police have trouble enforcing the rule of law are well documented in Europe: they are called “no-go zones” or “sensitive urban zones” (“no-go zone” is controversial because technically you can go there, but would you want to go to this zone, especially if you were Jewish?). Britain literally has Sharia patrols harassing gay people and women.
These are just the tip of the iceberg of what is happening with current levels of immigration. Just imagine what happens with fully open borders. I really don’t think its advocates have grappled with this graph, and what it means for Europe under open borders. No matter how generous Europe was, its institutions would never be able to handle the wave of immigrants, and open borders advocates are seriously kidding themselves if they don’t see that Europe would turn into South Africa mixed with Syria, and the US would turn into Brazil. And then who would send aid to Africa?
Rule of law is slowly breaking down in the West, and elite Westerners are sitting in their filter bubbles fiddling while Rome burns. I’m not telling you to accept this scenario as likely; you would need to go do your own research at the object-level. But with even a small risk that this scenario is possible, it’s very significant for future human welfare.
Do you have ideas for people or professions the movement would benefit from and strategies for drawing them in and making them feel welcome?
I’ll think about it. I think some of the sources I’ve cited start to answer that question: finding people who are knowledgeable about the giant space of stuff that the media and academia are sweeping under the carpet for political reasons.
Before I delay my reply until I’ve read everything you’ve linked, I’ll rather post a WIP reply.
Thanks for all the data! I hope I’ll have time to look into Open Borders some more in August.
Error theorists would say that the blog post “Effective Altruists are Cute but Wrong” is cute but wrong, but more generally the idea of using PageRank for morality is beautifully elegant (but beautifully elegant things have often turned out imperfect in practice in my experience). I still have to read the rest of the blog post though.
Eigendemocracy reminds me of Cory Doctorow’s whuffie idea.
An interesting case for eigenmorality is when you have distinct groups that cooperate amongst themselves and defect against others. Especially interesting is the case where there are two large, competing groups that are about the same size.
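For readers curious how the eigenmorality idea cashes out computationally, here is a minimal sketch (the cooperation matrix and the agent setup are an invented toy example, not anything from Aaronson’s post): an agent’s moral score is the cooperation-weighted sum of the scores of those it cooperates with, i.e. the principal eigenvector of the cooperation matrix, found by power iteration, just as PageRank does for links.

```python
def eigenmorality(coop, iterations=100):
    """coop[i][j] = 1.0 if agent i cooperated with agent j, else 0.0.

    Returns normalized scores: an agent is moral to the extent that it
    cooperates with agents who are themselves moral (principal eigenvector
    of the cooperation matrix, computed by power iteration).
    """
    n = len(coop)
    scores = [1.0 / n] * n
    for _ in range(iterations):
        # Each agent's new score is its cooperation weighted by the
        # current scores of the agents it cooperated with.
        new = [sum(coop[i][j] * scores[j] for j in range(n)) for i in range(n)]
        total = sum(new) or 1.0  # avoid division by zero if all scores vanish
        scores = [s / total for s in new]
    return scores

# Toy example: agents 0 and 1 cooperate with each other; agent 2
# defects against everyone and receives no cooperation.
coop = [
    [0.0, 1.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0],
]
scores = eigenmorality(coop)
# Agents 0 and 1 end up with equal positive scores; agent 2 with zero.
```

The two-competing-groups case mentioned above shows up here as a cooperation matrix with two diagonal blocks: power iteration then converges to a vector concentrated on whichever group’s internal cooperation structure dominates, which is exactly why that case is interesting.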
“I’ll take your word that many EAs also think this way, but I don’t really see it effecting the main charitable recommendations. Followed to its logical conclusion, this outlook would result in a lot more concern about the West.”
Can you elaborate please? From my perspective, just because a Western citizen is richer or more powerful doesn’t mean that helping to satisfy their preferences is more valuable in terms of indirect effects. Or are you talking about whom to persuade? Because I don’t see many EA orgs asking Dalit groups for their cash or time yet.
It’s not the preferences of the West that are inherently more valuable, it’s the integrity of its institutions, such as rule of law, freedom of speech, etc… If the West declines, then it’s going to have negative flow-through effects for the rest of the world.
There are other countries with sound institutions, like Singapore and Japan, but I’m not as worried about them as I am about the West, because they have an eye towards self-preservation. For instance, both countries have declining birth rates, but they protect their own rule of law (unlike the West), and they have more cautious immigration policies that help prevent their populations from being replaced by foreign ones (unlike the West). The West, unlike these more sensible Asian countries, is playing a dangerous game by treating its institutions in a cavalier way for ill-thought-out redistributionist projects and importing leftist voting blocs.
EAs should also be more worried about decline in the West, because Westerners (particularly NW Europeans) are more into charity than other populations (e.g. Eastern Europeans are super-low in charity). My previous post documents this. A Chinese- or Russian- dominated future is really, really bad for EA, for existential risk prevention, and for AI safety.
There are other countries with sound institutions, like Singapore and Japan, but I’m not so worried about them as I am about the West, because they have an eye towards self-preservation.
I wouldn’t be so cavalier about that. Japan, specifically, has about zero immigration and its population, not to mention the workforce, is already falling. Demographics is a bitch. Without any major changes, in a few decades Japan will be a backwater full of old people’s homes that some Chinese trillionaire might decide to buy on a whim and turn into a large theme park.
Open borders and no immigration are like Scylla and Charybdis—neither is a particularly appealing option for a rich and aging country.
I also feel that the question “how much immigration to allow” is overrated. I consider it much less important than the question of “precisely what kind of people should we allow in”. A desirable country has an excellent opportunity to filter a part of its future population and should use it.
I agree that Japan has its own problems. No solutions are particularly good if they can’t get their birth rates up. Singapore also has low birth rates. What problems are preventing high-IQ people from reproducing might be something that EAs should look into.
“How much immigration to allow” and “precisely what kind of people should we allow in” can be related, because the more immigration you allow, the less selective you are probably being, unless you have a long line of qualified applicants. Skepticism of open borders doesn’t require being against immigration in general.
As you say, a filtered immigrant population could be very valuable. For example, you could have “open borders” for educated professionals from low-crime, low-corruption countries with compatible value systems, who are encouraged to assimilate. I’m pretty sure this isn’t what most open borders advocates mean by “open borders,” though.
The left doesn’t “want” a responsible immigration policy either. For their political goals, they want a large and dissatisfied voting bloc. And for their signaling goals, it’s much more holy to invite poor, unskilled people rather than skilled professionals who want to assimilate.
Trading off these elements in an expected value framework … is probably again a rather personal decision
If you aren’t aware of the relevant decision theory, then I have good news for you!
I’m not sure this is true, at least in the narrow instance of rationalists trying to make maximally effective decisions based on well-defined uncertainties. In principle, at least, it should be possible to calculate the value of information. Decision theory has a concept called the expected value of perfect information. If you’re not 100% sure of something, but the cost of obtaining information is high (which it generally is in philosophy, as evidenced by the somewhat slow progress over the centuries) and giving opportunities are shrinking (which they are in many areas, as conditions improve), then you probably want to risk giving sub-optimally by giving now vs. later. The price of information is simply higher than the expected value.
Unfortunately, you might still need to make a judgement call to guesstimate the values to plug in.
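To make the concept concrete, here is a minimal sketch of an expected-value-of-perfect-information calculation for a toy give-now decision. All of the probabilities and utilities are made-up numbers purely for illustration, exactly the kind of guesstimated values the parent comment warns you would have to plug in:

```python
# Toy expected value of perfect information (EVPI) calculation.
# All numbers are invented for illustration only.

# Two possible world states with prior probabilities.
priors = {"charity_A_better": 0.6, "charity_B_better": 0.4}

# Utility (say, QALYs per $1000 donated) of each action in each state.
utility = {
    "give_to_A": {"charity_A_better": 10.0, "charity_B_better": 4.0},
    "give_to_B": {"charity_A_better": 3.0, "charity_B_better": 12.0},
}

def expected_utility(action):
    """Expected utility of an action under the prior over states."""
    return sum(priors[s] * utility[action][s] for s in priors)

# Best we can do acting now, under uncertainty.
ev_without_info = max(expected_utility(a) for a in utility)

# If we learned the true state first, we'd pick the best action for it.
ev_with_perfect_info = sum(
    priors[s] * max(utility[a][s] for a in utility) for s in priors
)

# EVPI: the most that resolving the uncertainty could be worth.
evpi = ev_with_perfect_info - ev_without_info
print(f"EV acting now:        {ev_without_info:.2f}")   # 7.60
print(f"EV with perfect info: {ev_with_perfect_info:.2f}")  # 10.80
print(f"EVPI:                 {evpi:.2f}")              # 3.20
```

If the cost of obtaining the information exceeds the EVPI (here 3.2 utility units), giving now is the better bet, which is the parent comment's point.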
Thanks! I hadn’t seen the formulae for the expected value of perfect information before. I haven’t taken the time to think them through yet, but maybe they’ll come in handy at some point.
If anyone’s skimming through these comments, it’s worthwhile noting that most of my original ideas as seen in my top-level comment have been thoroughly refuted.
tl;dr—My perspective is, in short, echoed on Marginal Revolution:
‘Of course, there are systematic problems with charitable giving. Most importantly, the feedback mechanism is never going to work as well when people are buying something to be consumed by others (as Milton Friedman explains)’ –
Those criticisms that remain, and many stronger points of contention, are explained far more eloquently, and independently, in Journeyman’s critique here.
Anyhow, I don’t like the movement’s branding, which is essentially its core feature, since the community would probably reorganise around a new brand anyway. Altruism is fictional, hypothetical; it doesn’t exist.
It has been observed, however, that the very act of eating (especially when there are others starving in the world) is such an act of self-interested discrimination. Ethical egoists such as Rand, who readily acknowledge the (conditional) value of others to an individual, and who readily endorse empathy for others, have argued the exact reverse of Rachels: that it is altruism which discriminates: “If the sensation of eating a cake is a value, then why is it an immoral indulgence in your stomach, but a moral goal for you to achieve in the stomach of others?”
It is therefore altruism which is an arbitrary position, according to Rand.
That would be another example of things which some EAs do, but which don’t yet seem to percolate through to the public-facing parts of the movement. For example, valuing other EAs due to flow-through contradicts Singer’s view, as far as I understand him:
Effective altruists do not discount suffering because it occurs far away or in another country or afflicts people of a different race or religion. They agree that the suffering of animals counts too and generally agree that we should not give less consideration to suffering just because the victim is not a member of our species.
I don’t get your argument there. After all, you might e.g. value other EAs instrumentally because they help members of other species. That is, you intrinsically value an EA like anyone else, but you’re inclined to help them more because that will translate into others being helped.
Some EAs are open borders advocates, but open borders is a ridiculously dangerous experiment for the West. They have not satisfactorily accounted for the crime and corruption that immigrants may bring. Additionally, under democracy, immigrants can vote and change the culture. Open border advocates hope that institutions will survive, but they have provided no good arguments that Western institutions will survive rapid demographic change. Institutions might seem fine and then rapidly collapse in a non-linear way. If Western Civilization collapses into ethnic turmoil or Soviet sclerosis, then humans everywhere will suffer.
A good straightforward illustration of how institutions are entangled with culture is the difficulty the West has had exporting democracy to the Middle East.
Syrian openish border events reignited my interest in this so I did a bit more reading:
On the one hand, there is evidence that people who move from a more violence-supportive cultural context to a less violence-supportive one can have their tolerance for violence lessened as a result… On the other hand, violence-supportive attitudes can be imported by immigrant communities from one cultural context to another. -Domestic violence attitudes
They seem to operate from the notion that giving away some of their money to charity is taken for granted, so they just need to find the best charity out of those that are possible to evaluate.
A lot of the post seems to confuse complex strategic moves, like GiveWell’s choice to start by focusing on lives saved by proven interventions, with the belief that lives saved by proven interventions are the most important thing.
It is possible that some of a group doesn’t believe the logical consequences of its own positions. That doesn’t make them immune from criticism based on those logical consequences.
GiveWell’s actual positions on its charity recommendations are quite long documents. The problem comes when you reduce the complex position to a simplified one.
Deworming saves lives, but at the same time it’s also better at getting children to attend school than a lot of other interventions. The fact that the argument for deworming is commonly made via lives saved in no way implies that the other benefits don’t factor in.
I do believe that my comment accurately characterizes the large EA organizations like GiveWell and philosophers like Peter Singer. I do realize that EAs are smart people, and many individual EAs have other beliefs and engage in all sorts of research. For example, some EAs are concerned about nuclear war with Russia, and today I discovered the Global Catastrophic Risk Institute and the Global Priorities Project, which are outside of my critique. However, for now, Peter Singer, GiveWell, Giving What We Can, and similar approaches are the most emblematic of EA, and it is towards this style of EA that my critique is directed, which I indicated in my previous comment when I said I was addressing “typical” or “median” EA. I believe it is fair to judge EA (as it currently exists) by these dominant approaches.
I disagree with you that I am stereotyping, but I think it’s good for me to clarify the scope of my critique, so I am adding a note to my previous comment that links to this comment.
That 80,000 Hours post doesn’t contradict my argument at all, and in fact reinforces it. My comment never argued that EAs believe that everyone should earn to give, only that they are very confident about their moral claims about what people should do with their money. That post still shows that 80,000 Hours believes that at least 10% of people should earn to give, which is still an incredibly strong ethical claim.
A lot of the post seems to confuse complex strategic moves, like GiveWell’s choice to start by focusing on lives saved by proven interventions, with the belief that lives saved by proven interventions are the most important thing.
Obviously GiveWell cannot show that their interventions are the “most important thing.” But GiveWell does claim that its proven interventions are a sufficiently good thing to justify you spending money on them, and this is an immense moral claim. It’s not like GiveWell is a purely informational website.
In the context of the larger EA movement, Peter Singer’s philosophy and EA pledges argue with incredible confidence that people should be giving. EA is extremely evangelical, and Singer’s philosophy is incredibly flawed and emotionally manipulative.
The problem is that none of the most common EA approaches have defeated the “null giving hypothesis” of spending your money on yourself, or saving it in an investment account and then giving the compounded amount to another cause in the future. If someone is already insisting on giving to charity, then GiveWell might redirect their money in a direction that is actually useful, but EA is also trying to get people involved who were not doing charity before, and its moral arguments and understanding of the world are just not strong enough to justify spending money on the most dominant charitable approaches.
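The "null giving hypothesis" of investing and giving later can be made concrete with a toy calculation. This is a sketch with invented numbers, not a claim about actual market returns or actual rates at which giving opportunities shrink; the whole dispute is over what those numbers really are:

```python
# Toy "give now" vs. "invest and give later" comparison.
# All rates are invented for illustration only.

def future_impact(amount, years, market_return, opportunity_decay):
    """Impact of investing `amount` for `years` and then donating,
    measured in units of impact-per-dollar-today (normalized to 1.0).
    `opportunity_decay` models giving opportunities becoming less
    cost-effective over time as global conditions improve."""
    grown = amount * (1 + market_return) ** years
    per_dollar_then = 1.0 / (1 + opportunity_decay) ** years
    return grown * per_dollar_then

impact_now = 1000 * 1.0  # donate $1000 today at unit cost-effectiveness
impact_later = future_impact(
    1000, years=10, market_return=0.05, opportunity_decay=0.07
)
print(impact_now, round(impact_later, 1))
```

Under these particular assumptions (opportunities shrink faster than the portfolio grows), giving now wins; flip the two rates and waiting wins. The point of the comment stands either way: the answer hinges on empirical guesses that EA has not established.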
“X is the most efficient birdfeeder on the market” is a different type of claim from “the best birdfeeder on the market is worth spending money on,” or “feeding birds is a moral imperative,” or “we should pledge to feed birds and evangelize other people to do so, too.” My impression is that EAs are getting these kinds of claims mixed up.
Interesting that the solutions you’re jumping to are about defending the ‘west’ and beating the south / east rather than working with the south/east to make sure the best of both is shared?
To be clear, when I speak of defending the West, I am mostly thinking of defending the West against self-inflicted problems. Nobody is talking about “beating” the global south / east. If the West declines, then it won’t be in a very good position to share anything with anyone.
EA seems to value lives equally, but this is implausible from psychology (which values relatives and friends more), and also implausible from non-naive consequentialism, which values people based on their contributions, not just their needs.
The consequentialist issue could be addressed by the assumption that if only people’s needs were met, their potential for contribution would be equal. Do the people involved in EA generally believe that?
EAs might believe that, but that would be an example of their lack of knowledge of humanity and adoption of simplistic progressivism. Human traits for either altruism or accomplishment are not distributed evenly: people vary in clannishness, charity, civic-mindedness, corruption, and IQ. It is most likely that differences between people explain why some groups have trouble building functional institutions and meeting their own needs.
Whether basic needs are met doesn’t explain why some groups within Europe are so different from each other. Southern Europe and parts of Eastern Europe have extremely low concentrations of charitable organizations. Also, good luck explaining the finding in the post I linked in my previous comment that vegetarianism in the US is correlated at 0.68 with English ancestry (but only weakly with European ancestry). Even different groups of white people are really, really different from each other, such as the differences between Yankees and Southerners in the US, stemming from differences between settlers from different parts of England.
Human groups evolved with geographical separation and selection pressures. For example, the clannishness source I linked shows how tons of different outcomes are related to whether groups are inside or outside the Hajnal Line. Different rates of inbreeding will result in different strengths of kin selection vs. reciprocal altruism. For example, here is the map of corruption with the Hajnal Line superimposed.
There is no good reason to believe that humans have equal potential for altruism and accomplishment, though there are benefits to signaling this belief.
Well, quite. The problem I see is that equality of worth is for some a sacred value, leading to the valuing of all lives equally and direction of resources to wherever the most lives can be saved, regardless of whose they are. While it is not something that logically follows from the basic idea of directing resources wherever they can do the most good, I don’t see the EA movement grasping the nettle of what counts as the most good. Lives or QALYs are the only things on the EA table at present.
This matches research showing that there are “sacred values”, like human lives, and “unsacred values”, like money. When you try to trade off a sacred value against an unsacred value, subjects express great indignation (sometimes they want to punish the person who made the suggestion).
Lives or QALYs are the only things on the EA table at present.
How do you come to that conclusion?
When the Open Philanthropy Project researches whether we should spend more effort on dealing with the risk of solar storms, how’s that Lives or QALYs?
I may have a limited view of the EA movement. I had in mind primarily Givewell, whose currently recommended charities are all focussed on directing money towards the poorer parts of the world, to alleviate either disease or poverty. The Good Ventures portfolio of grants is mostly directed to the same sort of thing.
On global threats:
When the Open Philanthropy Project researches whether we should spend more effort on dealing with the risk of solar storms, how’s that Lives or QALYs?
How would it not be? Major and prolonged geomagnetic storms threaten the lives and QALYs of everyone everywhere, so there isn’t an issue there of selecting who to save first. Protective measures save everyone.
I had in mind primarily Givewell, whose currently recommended charities are all focussed on directing money towards the poorer parts of the world, to alleviate either disease or poverty.
You confuse the strategic reasons why GiveWell makes those recommendations with the shortest summary of the intervention.
Spending money on health care interventions does more than just save lives. There are a lot of ripple effects.
GiveWell is also producing incentives for charities in general to become more transparent and evidence-based.
Major and prolonged geomagnetic storms, threaten the lives and QALYs of everyone everywhere
You said only lives and QALYs. I’m not disputing that it also affects lives and QALYs. I’m disputing that’s the only thing you get from it.
it is not something that logically follows from the basic idea of directing resources wherever they can do the most good
It depends on how you define “good”. In particular, in some value systems (and in some contexts) human lives are valued according to their productivity, and in other value systems and contexts, lives are valued regardless of their economic use or potential.
Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decisions based on the combination of both.
I have a most altruistic mother, and I hate listening about other people’s problems which they have created without me, presented in such a way that if only I did give a damn I would, of course, join the fight and go on helping them for however long it takes. She is quite passionate when she comes home and unloads.
In contrast, when you, for example, write up a report about a place rich in biodiversity to be made into a reserve, you get this warm feeling that you are creating a way for a problem to actually be solved, or at least solvable. And you do it not because somebody has an Enlightenment impulse around midnight, which you can’t escape being a dependent minor.
So: altruistic offspring, probable. EA offspring, improbable. Therefore, EA activists are right in not investing in it.
Effective Altruism is a well-intentioned but flawed philosophy. This is a critique of typical EA approaches, but it might not apply to all EAs, or to alternative EA approaches.
Edit: In a follow up comment, I clarify that this critique is primarily directed at GiveWell and Peter Singer’s styles of EA, which are the dominant EA approaches, but are not universal.
There is no good philosophical reason to hold EA’s axiomatic style of utilitarianism. EA seems to value lives equally, but this is implausible from psychology (which values relatives and friends more), and also implausible from non-naive consequentialism, which values people based on their contributions, not just their needs.
Even if you agree with EA’s utilitarianism, it is unclear that EA is actually effective at optimizing for it over a longer time horizon. EA focuses on maximizing lives saved in the present, but it has never been shown that this approach is optimal for human welfare over the long-run. The existential risk strand of EA gets this better, but it is too far off.
If EA is true, then moral philosophy is a solved problem. I don’t think moral philosophy works that way. Values are much harder than EA gives credit for. Betting on a particular moral philosophy with a percentage of your income shows an immense amount of confidence, and extraordinary claims require extraordinary evidence.
EA has an opportunity cost, and its confidence is crowding out better ideas. What would those better altruistic interventions be? I don’t know, but I feel like we can do better.
EAs have a weak understanding of geopolitics and demographics. The current state of the world is that Western Civilization, the goose that laid the golden egg, is declining. If indeed Western Civilization is in trouble, and we are facing near or medium-term catastrophic risks like social collapse, turning into Brazil, or war with Russia or China, then the highest-value opportunities for altruism will be at home. Unless you think we have a hard-takeoff AI scenario or technological miracles in the near-term, we should be very worried about geopolitics, demographics, and civilization in the medium-term and long-term.
If Western Civilization collapses, or is over-taken by China, then that will not be a good future for human welfare. Averting this possibility is way more high-impact than anything else that EAs are currently doing. If the West is secure and abundant, then maybe EAs have the right idea by redistributing wealth out of the West. But if the West is precarious and fragile, then redistribution makes less sense, and addressing the risks in the West seems more important.
EAs do not understand demographics, or are not taking them seriously if they do. The West is currently faltering in fertility and undergoing population replacement from people from areas with higher crime and corruption. Meanwhile, altruism itself varies between populations based on clannishness and inbreeding. We are heading towards a future that is demographically more clannish and less altruistic.
Some EAs are open borders advocates, but open borders is a ridiculously dangerous experiment for the West. They have not satisfactorily accounted for the crime and corruption that immigrants may bring. Additionally, under democracy, immigrants can vote and change the culture. Open border advocates hope that institutions will survive, but they have provided no good arguments that Western institutions will survive rapid demographic change. Institutions might seem fine and then rapidly collapse in a non-linear way. If Western Civilization collapses into ethnic turmoil or Soviet sclerosis, then humans everywhere will suffer.
Some EAs have a skeptical attitude towards parenthood, because it takes away money from charity, and believe that EAs are easier to convert than create. In some cases, EAs who want to become parents justify parenthood as an unprincipled exception. This whole conversation is ridiculous and exemplifies EAs’ flawed moral philosophy and understanding of humans. Altruistic parents are likely to have altruistic children due to the heritability of behavioral traits. If altruistic people fail to breed, then they will take their altruistic genes to the grave with them, like the Shakers. If altruism itself is a casualty of changing demographics, then human welfare will suffer in the future. (If you doubt this can happen, then check out the earlier two links, and good luck getting Eastern Europeans or Middle-Easterners interested in EA.)
I don’t think EAs do a very good job of distinguishing their moral intuitions from good philosophical arguments; see the interest of many EAs in open borders and animal rights. I do not see a large understanding in EA of what altruism is and how it can become pathological. Pathological altruism is where people become practically addicted to a feeling of doing good, which leads them sometimes to act with negative consequences. A quote from the book in that review, which shows some of the difficulties disentangling moral psychology from moral philosophy:
It seems that some people have strong intuitions towards altruism or animal rights, but it’s another thing entirely to say that those arguments are philosophically strong. It seems that people who are biologically predisposed towards altruism will be motivated to find philosophical arguments that justify what they already want to do. I don’t think EAs have corrected for this bias. If EAs’ arguments are flawed, then their adoption of them must be explained by their moral intuitions or signaling desires. Since EA provides great opportunities to signal altruism, intelligence, and discernment, it seems that there would be a gigantic temptation for some personalities to get into EA and exaggerate the quality of its arguments, or adopt its axioms even though other axioms are possible. Even though EAs employ reason and philosophy unlike typical pathological altruists, moral philosophy is subjective, and choice of particular moral theories seems highly related to personality.
The other psychological bias of EAs is due to them getting nerd-sniped by narrowly defining problems, or picking problems that are easier to solve or charities that are possible to evaluate. They seem to operate from the notion that giving away some of their money to charity is taken for granted, so they just need to find the best charity out of those that are possible to evaluate. In an inconvenient world for an altruist, the high-value opportunities are unknown or unknowable, throwing your money at what seems best might result in a negligible or negative effect, and keeping your money in your piggy bank until more obvious opportunities emerge might make the most sense.
EA isn’t all bad. It’s probably better than typical ineffective charities, so if you absolutely must give to a charity, then effective charities are probably better. EAs have the right idea by trying to evaluate charities. Many EA arguments are strong within the bounds of utilitarianism, or the confines of a particular problem. But EAs have a hard road towards justification because their philosophy advocates spending money on strong moral claims, and being wrong about important things about the world will totally throw off their results.
My criticisms here don’t apply to all EAs or all possible EA approaches, just the median EA arguments and interventions I’ve seen. It is conceivable that in the future EA will become more persuasive to a larger group of people once it has greater knowledge about the world and incorporates that knowledge into its philosophy. An alternative approach to EA would focus on preserving Western Civilization and avoiding medium-term political/demographic catastrophes. But nobody is sufficiently knowledgeable at this point to know how we could spend money towards this goal.
As someone said in another comment there are the core tenets of EA, and there is your median EA. Since you only seem to have quibbles with the latter, I’ll address some of those, but I don’t feel like accepting or rejecting them is particularly important for being an EA in the context of the current form of the movement. We love discussing and challenging our views. Then again I think I so happen to agree with many median EA views.
VoiceOfRa put very concisely what I think is a median EA view here, but the comment is so deeply nested that I’m afraid it might get buried: “Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decisions based on the combination of both.”
I think this has been mentioned in the comments but not very directly. The median EA view may be not to bother with philosophy at all because the branches that still call themselves philosophy haven’t managed to come to a consensus on central issues over centuries so that there is little hope for the individual EA to achieve that.
However when I talk to EAs who do have a background in philosophy, I find that a lot of them are metaethical antirealists. Lukas Gloor, who also posted in this thread, has recently convinced me that antirealism, though admittedly unintuitive to me, is the more parsimonious view and thus the view under which I operate now. Under antirealism moral intuitions, or some core ones anyway, are all we have, so that there can be no philosophical arguments (and thus no good or bad ones) for them.
Even if this is not a median EA view, I would argue that most EAs act in accordance with it just out of concern for the cost-effectiveness of their movement-building work. It is not cost-effective to try to convince everyone of the most unintuitive inferences from one’s own moral system. However, among the things that are important to the individual EA, there are likely many that are very uncontroversial in most of society, and focusing on those views in one’s “evangelical” EA work is much more cost-effective.
From my moral vantage point, the alternative (I’ll consider a different counterfactual in a moment) of keeping the money to spend on myself, where its marginal positive impact on my happiness is easily two or three orders of magnitude lower and my uncertainty over what will make me happy is only slightly lower than with some top charities, would be the much more extraordinary claim.
You could break that up and note that in the end I’m not deciding to just “donate effectively,” but that I’ll decide on a very specific intervention and charity to donate to, for example Animal Equality, making my decision much more shaky again, but I’d also have to make such highly specific decisions that are probably only slightly less shaky when trying to spend money on my own happiness.
However, the alternative might also be:
That’s something the median EA has probably considered a good deal. Even at GiveWell there was a time in 2013 when some of the staff pondered whether it would be better to hold off on their personal donations and donate a year later, once they had discovered better giving opportunities.
However, several of your arguments seem to stem from uncertainty in the sense of “There is substantial uncertainty, so we should hold off doing X until the uncertainty is reduced.” Trading off these elements in an expected value framework and choosing the right counterfactuals is probably again a rather personal decision when it comes to investing one’s donation budget, but over time I’ve become less risk-averse and more ready to act under some uncertainty, which has hopefully brought me closer to maximizing the expected utility of my actions. Plus, I don’t expect any significant decreases in uncertainty wrt the best giving opportunities in the future that I could wait for. There will hopefully be more with similar or only slightly greater levels of uncertainty though.
Part of the reason I wrote my critique is that I know that at least some EAs will learn something from it and update their thinking.
I’ll take your word that many EAs also think this way, but I don’t really see it affecting the main charitable recommendations. Followed to its logical conclusion, this outlook would result in a lot more concern about the West.
Well, there is a question about what EA is. Is EA about being effectively altruistic within your existing value system? Or is it also about improving your value system to more effectively embody your terminal values? Is it about questioning even your terminal values to make sure they are effective and altruistic?
Regardless of whether you are an antirealist, not all value systems are created equal. Many people’s value systems are hopelessly contradictory, or corrupted by politics. For example, some people claim to support gay people, but they also support unselective immigration from countries with anti-gay attitudes, which will inevitably cause negative externalities for gay people. That’s a contradiction.
I just don’t think a lot of EAs have thought their value systems through very thoroughly, and their knowledge of history, politics, and object-level social science is low. I think there are a lot of object-level facts about humanity, and events in history or going on right now which EAs don’t know about, and which would cause them to update their approach if they knew about it and thought seriously about it.
Look at the argument that EAs make towards ineffective altruists: they know so little about charity and the world that they are hopelessly unable to achieve significant results in their charity. When EAs talk to non-EAs, they advocate that (a) people reflect on their value system and priorities, and (b) they learn about the likely consequences of charities at an object-level. I’m doing the same thing: encouraging EAs to reflect on their value systems, and attain a broader geopolitical and historical context to evaluate their interventions.
What is or isn’t controversial in society is more a function of politics than of ethics. Progressive politics is memetically dominant, potentially religiously-descended, and falsely presents itself as universal. Imagine what an EA would do in Nazi Germany under the influence of propaganda. How about Soviet Effective Altruists, would they actually do good, or would they say “collectivize faster, comrade?” How do we know we aren’t also deluded by present-day politics?
It seems like there should be some basic moral requirement that EAs give their value system a sanity-check instead of just accepting whatever the respectable politics of the time tell them. If indeed politics has a very pervasive influence on people’s knowledge and ethics, then giving your value system a sanity-check would require separating out the political component of your worldview. This would require deep knowledge of politics, history, and social science, and I just don’t see most EAs or rationalists operating at this level (I’m certainly not: the more I learn, the more I realize I don’t know).
The fact that the major EA interventions are so palatable to progressivism suggests that EA is operating with very bounded rationality. If indeed EA is bounded by progressivism, and progressivism is a flawed value system, then there are lots of EA missed opportunities lying around waiting for someone to pick them up.
I didn’t respond to your critiques that went into a more political direction because there was already discussion of those aspects there that I wouldn’t have been able to add anything to. There is concern in the movement in general and in individual EA organizations that because EAs are so predominantly computer scientists and philosophers, there is a great risk of incurring known and unknown unknowns. In the first category, more economists for example would be helpful; in the second category it will be important to bring people from a wide variety of demographics into the movement without compromising its core values. As a computer scientist, I’m pretty median again.
Indeed. I’m not sure if the median EA is concerned about this problem yet, but I wouldn’t be surprised if they are. Many EA organizations are certainly very alert to the problem.
This concern manifests in movement-building (GWWC et al.) and capacity-building (80k Hours, CEA, et al.). There is also concern that I share, but that may not yet be a median EA concern, that we should focus more on movement-wide capacity-building, networking, and some sort of quality over quantity approach to allow the movement to be better and more widely informed. (And by “quantity” I don’t mean to denigrate anyone; I just mean more people like myself who already feel welcome in the movement because everyone speaks their dialect and whose peers are easily convinced too.)
Throughout the time that I’ve been part of the movement, the general sentiment either in the movement as a whole or within my bubble of it has shifted in some ways. One trend that I’ve perceived is that in the earlier days there was more concern over trying vs. really trying, while now concern over putting one’s activism on a long-term sustainable basis has become more important. Again, this may be just my filter bubble. This is encouraging as it shows that everyone is very well capable of updating, but it also indicates that as of one or two years ago, we still had a bunch to learn even concerning rather core issues. In a few more years, I’ll probably be more confident that some core questions are not so much in flux anymore that new EAs can overlook or disregard them and thereby dilute what EA currently stands for or shift it into a direction I couldn’t identify with anymore.
Again, I’m not ignoring your points on political topics, I just don’t feel sufficiently well-informed to comment. I’ve been meaning to read David Roodman’s literature review on open borders–related concerns, since I greatly enjoyed some of his other work, but I haven’t yet. David Roodman now works for the Open Philanthropy Project.
I’ve always perceived EA as whatever stands at the end of any such process, or maybe not the end but some critical threshold when a person realizes that they agree with the core tenets: that they value others’ well-being, and that greater well-being, or the well-being of more beings, weighs heavier than lesser well-being or the well-being of fewer. If they reach such a threshold, that is. If they do, I see all three processes as relevant.
Of course.
Yes, thanks! That’s why I was most interested in your comment in this thread, and because all other comments that piqued my interest in similar ways already had comprehensive replies below them when I found the thread.
This needs to be turned into a concrete strategy, and I’m sure CEA is already on that: identifying exactly what sorts of expertise are in short supply in the movement and networking among the people who possess that expertise. I’ve made some minimal-effort attempts to pitch EA to economists, but inviting such people to speak at events like EA Global is surely a much more effective way of drawing them and their insights into the movement. That’s not limited to economists, of course.
Do you have ideas for people or professions the movement would benefit from and strategies for drawing them in and making them feel welcome?
Given how many philosophers there are in the movement, this would surprise me. Is it possible that it’s more the result of the ubiquitous disagreement between philosophers?
I’ve wondered about that in the context of moral progress. Sometimes the idea of moral progress is attacked on the grounds that proponents base their claims for moral progress on how history has developed into the direction of our current status quo, which is rather pointless since by that logic any historical trend toward the status quo would then become “moral progress.” However, by my moral standards the status quo is far from perfect.
Analogously I see that the political views EAs are led to hold are so heterogeneous that some have even thought about coining new terms for this political stance (such as “newtilitarianism”), luckily only in jest. (I’m not objecting to the pun but I’m wary of labels like that.) That these political views are at least somewhat uncommon in their combination suggests to me that we’re not falling into that trap, or at least making an uncommonly good effort of avoiding it. Since the trap is pretty much the default starting point for many of us, it’s likely we still have many legs trapped in it despite this “uncommonly good effort.” The metaphor is already getting awkward, so I’ll just add that some sort of contrarian hypercorrection would of course constitute just another trap. (As it happens, there’s another discussion of the importance of diversity in the context of Open Phil in that Vox article.)
No need for you to address any particular political point I’m making. For now, it is sufficient for me to suggest that reigning progressive ideas about politics are flawed and holding EAs back, without you committing to any particular alternative view.
I’m glad to hear that EAs are focusing more on movement-building and collaboration. I think there is a lot of value in eigenaltruism: being altruistic only towards other eigenaltruistic people who “pay it forward” (see Scott Aaronson’s eigenmorality). Civilizations have been built with reciprocal altruism. The problem with most EA thinking is that it is one-way, so the altruism is consumed immediately. This post argues that morality evolved as a system of mutual obligation, and that EAs misunderstand this.
Although there is some political heterogeneity in EA, it is dominated by progressives, and the main public recommendations are all progressive causes. Moral progress is a tricky concept: for example, the French Revolution is often considered moral progress, but the pictures paint another story.
On open borders, economic analyses like Roodman’s are just too narrow. They do not take into account all of the externalities, such as crime and changes to cultural institutions. OpenBorders.info addresses many of the objections; it does a good job of summarizing some of the anti-open-borders arguments, but often fails to refute them, yet this lack of refutation doesn’t translate into any update of their general stance on immigration.
If humans are interchangeable homo economicus, then open borders would be an economic and perhaps moral imperative. If indeed human groups are significantly different, such as in crime rates, then that throws a substantial wrench into open borders. If the safety of open borders is in question, then it is a risky experiment.
Some of the early indicators are scary, like the Rotherham scandal. There are reports of similar coverups in other areas, and economic analyses do not capture the harms to these thousands of children. High-crime areas where the police have trouble enforcing rule of law are well documented in Europe: they are called “no-go zones” or “sensitive urban zones” (“no-go zone” is controversial because technically you can go there, but would you want to go to this zone, especially if you were Jewish?). Britain literally has Sharia patrols harassing gay people and women.
These are just the tip of the iceberg of what is happening with current levels of immigration. Just imagine what happens with fully open borders. I really don’t think its advocates have grappled with this graph, and what it means for Europe under open borders. No matter how generous Europe was, its institutions would never be able to handle the wave of immigrants, and open borders advocates are seriously kidding themselves if they don’t see that Europe would turn into South Africa mixed with Syria, and the US would turn into Brazil. And then who would send aid to Africa?
Rule of law is slowly breaking down in the West, and elite Westerners are sitting in their filter bubbles fiddling while Rome burns. I’m not telling you to accept this scenario as likely; you would need to go do your own research at the object-level. But with even a small risk that this scenario is possible, it’s very significant for future human welfare.
I’ll think about it. I think some of the sources I’ve cited start answering that question: finding people who are knowledgeable about the giant space of stuff that the media and academia is sweeping under the carpet for political reasons.
Before I delay my reply until I’ve read everything you’ve linked, I’ll rather post a WIP reply.
Thanks for all the data! I hope I’ll have time to look into Open Borders some more in August.
Error theorists would say that the blog post “Effective Altruists are Cute but Wrong” is cute but wrong, but more generally the idea of using PageRank for morality is beautifully elegant (but beautifully elegant things have often turned out imperfect in practice in my experience). I still have to read the rest of the blog post though.
Eigendemocracy reminds me of Cory Doctorow’s whuffie idea.
An interesting case for eigenmorality is when you have distinct groups that cooperate amongst themselves and defect against others. Especially interesting is the case where there are two large, competing groups that are about the same size.
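To make the eigenmorality idea concrete, here is a minimal sketch of the PageRank-style recursion from Aaronson’s post: an agent’s moral score is the cooperation-weighted sum of the scores of those it cooperates with, found by power iteration. The two-coalition setup below is a hypothetical example constructed to match the case described above, not something from the original post.

```python
# Sketch of "eigenmorality": score_i is proportional to the sum of
# score_j over everyone that agent i cooperates with, so the scores
# form a leading eigenvector of the cooperation matrix.

def eigenmorality(coop, iters=100):
    """coop[i][j] = 1 if agent i cooperates with agent j, else 0."""
    n = len(coop)
    scores = [1.0 / n] * n  # start from a uniform prior
    for _ in range(iters):
        new = [sum(coop[i][j] * scores[j] for j in range(n)) for i in range(n)]
        total = sum(new) or 1.0
        scores = [s / total for s in new]  # normalize so scores sum to 1
    return scores

# Two equal-sized groups that cooperate internally and defect externally:
n, half = 8, 4
coop = [[1 if (i < half) == (j < half) else 0 for j in range(n)] for i in range(n)]
scores = eigenmorality(coop)
```

Starting from a uniform vector, both coalitions come out with identical scores: with two symmetric blocks the leading eigenvalue is degenerate, so the eigenvector alone cannot pick a side, which is exactly what makes the equal-sized-groups case interesting.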
“I’ll take your word that many EAs also think this way, but I don’t really see it affecting the main charitable recommendations. Followed to its logical conclusion, this outlook would result in a lot more concern about the West.”
Can you elaborate, please? From my perspective, just because a Western citizen is richer or more powerful doesn’t mean that helping to satisfy their preferences is more valuable in terms of indirect effects. Or are you talking about whom to persuade? Because I don’t see many EA orgs asking Dalit groups for their cash or time yet.
It’s not the preferences of the West that are inherently more valuable, it’s the integrity of its institutions, such as rule of law, freedom of speech, etc… If the West declines, then it’s going to have negative flow-through effects for the rest of the world.
I think it’s clearer, then, if you say sound institutions rather than the West?
There are other countries with sound institutions, like Singapore and Japan, but I’m not as worried about them as I am about the West, because they have an eye towards self-preservation. For instance, both those countries have declining birth rates, but they protect their own rule of law (unlike the West), and have more cautious immigration policies that help keep their populations from being replaced by foreign ones (unlike the West). The West, unlike sensible Asian countries, is playing a dangerous game by treating its institutions in a cavalier way for ill-thought-out redistributionist projects and importing leftist voting blocs.
EAs should also be more worried about decline in the West, because Westerners (particularly NW Europeans) are more into charity than other populations (e.g. Eastern Europeans are super-low in charity). My previous post documents this. A Chinese- or Russian- dominated future is really, really bad for EA, for existential risk prevention, and for AI safety.
I wouldn’t be so cavalier about that. Japan, specifically, has about zero immigration and its population, not to mention the workforce, is already falling. Demographics is a bitch. Without any major changes, in a few decades Japan will be a backwater full of old people’s homes that some Chinese trillionaire might decide to buy on a whim and turn into a large theme park.
Open borders and no immigration are like Scylla and Charybdis—neither is a particularly appealing option for a rich and aging country.
I also feel that the question “how much immigration to allow” is overrated. I consider it much less important than the question of “precisely what kind of people should we allow in”. A desirable country has an excellent opportunity to filter a part of its future population and should use it.
I agree that Japan has its own problems. No solutions are particularly good if they can’t get their birth rates up. Singapore also has low birth rates. The question of what is preventing high-IQ people from reproducing might be something that EAs should look into.
“How much immigration to allow” and “precisely what kind of people should we allow in” can be related, because the more immigration you allow, the less selective you are probably being, unless you have a long line of qualified applicants. Skepticism of open borders doesn’t require being against immigration in general.
As you say, a filtered immigration population could be very valuable. For example, you could have “open borders” for educated professionals from low-crime, low-corruption countries with compatible value systems who are encouraged to assimilate. I’m pretty sure this isn’t what most open borders advocates mean by “open borders,” though.
The left doesn’t “want” a responsible immigration policy either. For their political goals, they want a large and dissatisfied voting bloc. And for their signaling goals, it’s much more holy to invite poor, unskilled people rather than skilled professionals who want to assimilate.
If you aren’t aware of the relevant decision theory, then I have good news for you!
I’m not sure this is true, at least in the narrow instance of rationalists trying to make maximally effective decisions based on well-defined uncertainties. In principle, at least, it should be possible to calculate the value of information. Decision theory has a concept called the expected value of perfect information. If you’re not 100% sure of something, but the cost of obtaining information is high (which it generally is in philosophy, as evidenced by the somewhat slow progress over the centuries) and giving opportunities are shrinking (which they are for many areas, as conditions improve), then you probably want to risk giving sub-optimally by giving now rather than later. The price of information is simply higher than its expected value.
Unfortunately, you might still need to make a judgement call to guesstimate the values to plug in.
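As a toy illustration of that judgement call, here is a minimal expected-value-of-perfect-information calculation. The scenario, probabilities, and payoffs are all invented for the sake of the example; only the EVPI definition (expected value with perfect information minus the best expected value without it) is standard.

```python
# Hypothetical: charity A is "effective" with probability 0.6, and we
# can either give to A (high payoff if effective, low if not) or to a
# safe charity B with a known payoff. Units are arbitrary welfare units.
p = 0.6
payoff = {
    ("give_A", "effective"):   100,
    ("give_A", "ineffective"):  10,
    ("give_B", "effective"):    50,
    ("give_B", "ineffective"):  50,
}

def ev(action):
    """Expected value of an action under our current uncertainty."""
    return p * payoff[(action, "effective")] + (1 - p) * payoff[(action, "ineffective")]

# Best we can do without more information: pick the better action now.
ev_no_info = max(ev("give_A"), ev("give_B"))

# With perfect information we would pick the best action in each state.
ev_perfect = (p * max(payoff[("give_A", "effective")], payoff[("give_B", "effective")])
              + (1 - p) * max(payoff[("give_A", "ineffective")], payoff[("give_B", "ineffective")]))

evpi = ev_perfect - ev_no_info  # ≈ 16 under these made-up numbers
```

Under these assumptions, research costing more than about 16 welfare units isn’t worth doing before giving; the guesstimated inputs are doing all the work, which is exactly the judgement call mentioned above.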
Thanks! I hadn’t seen the formulae for the expected value of perfect information before. I haven’t taken the time to think them through yet, but maybe they’ll come in handy at some point.
If anyone’s skimming through these comments, it’s worthwhile noting that most of my original ideas as seen in my top-level comment have been thoroughly refuted.
tl;dr—My perspective is, in short, echoed on Marginal Revolution:
Those criticisms that remain, and many stronger points of contention, are far more eloquently and independently explained by Journeyman’s critique here.
Anyhow, I don’t like the movement’s branding, which is essentially its core feature, since the community would probably reorganise around a new brand anyway. Altruism is fictional, hypothetical; it doesn’t exist.
W. Pedia.
Thanks, this helped me!
Thank you for taking the time to write such a detailed description of the issue.
One minor thing:
Many EAs do seem to understand this to varying degrees, explicitly or implicitly: they value other EAs highly because of the flow-through effects.
That would be another example of things which some EAs do, but which don’t yet seem to percolate through to the public-facing parts of the movement. For example, valuing other EAs due to flow-through effects contradicts Singer’s view, as far as I understand him:
I don’t get your argument there. After all, you might e.g. value other EAs instrumentally because they help members of other species. That is, you intrinsically value an EA like anyone else, but you’re inclined to help them more because that will translate into others being helped.
A good straightforward illustration of how institutions are entangled with culture is the difficulty the West has had exporting democracy to the Middle East.
Syrian openish border events reignited my interest in this so I did a bit more reading:
What are you basing your moral philosophy on, if it’s not moral intuitions?
To me it seems like you object to EA because you stereotype it and then find that the stereotype produces problems. 80,000 Hours recently wrote a post indicating that they don’t believe that a majority should do earning-to-give: https://80000hours.org/2015/07/80000-hours-thinks-that-only-a-small-proportion-of-people-should-earn-to-give-long-term/
A lot of the post seems to confuse complex strategic moves, like GiveWell’s choice to start by focusing on lives saved by proven interventions, with the belief that lives saved by proven interventions are the most important thing.
It is possible that some of a group doesn’t believe the logical consequences of its own positions. That doesn’t make them immune from criticism based on those logical consequences.
GiveWell’s actual positions on its charity recommendations are quite long documents. The problem comes when you reduce a complex position to a simplified one.
Deworming saves lives, but at the same time it’s also better at getting children to attend school than a lot of other interventions. The fact that the argument for deworming is commonly made via lives saved in no way implies that the other benefits don’t factor in.
I do believe that my comment accurately characterizes the large EA organizations like GiveWell and philosophers like Peter Singer. I do realize that EAs are smart people, and many individual EAs have other beliefs and engage in all sorts of research. For example, some EAs are concerned about nuclear war with Russia, and today I discovered the Global Catastrophic Risk Institute and the Global Priorities Project, which are outside of my critique. However, for now, Peter Singer, GiveWell, Giving What We Can, and similar approaches are the most emblematic of EA, and it is towards this style of EA that my critique is directed, which I indicated in my previous comment when I said I was addressing “typical” or “median” EA. I believe it is fair to judge EA (as it currently exists) by these dominant approaches.
I disagree with you that I am stereotyping, but I think it’s good for me to clarify the scope of my critique, so I am adding a note to my previous comment that links to this comment.
That 80,000 Hours post doesn’t contradict my argument at all, and in fact reinforces it. My comment never argued that EAs believe that everyone should earn to give, only that they are very confident in their moral claims about what people should do with their money. That post still shows that 80,000 Hours believes that at least 10% of people should earn to give, which is still an incredibly strong ethical claim.
Obviously GiveWell cannot show that its interventions are the “most important thing.” But GiveWell does claim that its proven interventions are a sufficiently good thing to justify you spending money on them, and this is an immense moral claim. It’s not like GiveWell is a purely informational website.
In the context of the larger EA movement, Peter Singer’s philosophy and EA pledges argue with incredible confidence that people should be giving. EA is extremely evangelical, and Singer’s philosophy is incredibly flawed and emotionally manipulative.
The problem is that none of the most common EA approaches have defeated the “null giving hypothesis” of spending your money on yourself, or saving it in an investment account and then giving the compounded amount to another cause in the future. If someone is already insisting on giving to charity, then GiveWell might redirect their money in a direction that is actually useful, but EA is also trying to get people involved who were not doing charity before, and its moral arguments and understanding of the world are just not strong enough to justify spending money on the most dominant charitable approaches.
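To make the invest-then-give alternative concrete, here is a toy comparison; every number in it (return rate, horizon, cost growth) is invented for illustration, and a real comparison would need actual estimates of how the cost of producing a unit of good changes over time.

```python
# "Null giving hypothesis" sketch: give $1000 now, or invest it and
# give the compounded amount later. All rates are hypothetical.

def future_donation(amount, annual_return, years):
    """Nominal size of the donation after compounding."""
    return amount * (1 + annual_return) ** years

donate_now = 1000.0
donate_later = future_donation(1000.0, 0.05, 20)  # 5%/yr for 20 years

# Investing wins in nominal terms, but only beats giving now if the
# cost of doing a unit of good grows more slowly than the investment.
cost_growth = 0.07  # hypothetical: giving opportunities shrink at 7%/yr
effective_later = donate_later / (1 + cost_growth) ** 20
```

Under these made-up rates the nominal donation roughly 2.65x’s, yet the *effective* later donation falls below giving now, because good deeds were assumed to get more expensive faster than the money compounds. Flip the two rates and the conclusion flips too, which is why the hypothesis needs to be argued rather than assumed away.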
“X is the most efficient birdfeeder on the market” is a different type of claim from “the best birdfeeder on the market is worth spending money on,” or “feeding birds is a moral imperative,” or “we should pledge to feed birds and evangelize other people to do so, too.” My impression is that EAs are getting these kinds of claims mixed up.
Interesting that the solutions you’re jumping to are about defending the ‘west’ and beating the south / east rather than working with the south/east to make sure the best of both is shared?
To be clear, when I speak of defending the West, I am mostly thinking of defending the West against self-inflicted problems. Nobody is talking about “beating” the global south / east. If the West declines, then it won’t be in a very good position to share anything with anyone.
The consequentialist issue could be addressed by the assumption that if only people’s needs were met, their potential for contribution would be equal. Do the people involved in EA generally believe that?
EAs might believe that, but that would be an example of their lack of knowledge of humanity and adoption of simplistic progressivism. Human traits for either altruism or accomplishment are not distributed evenly: people vary in clannishness, charity, civic-mindedness, corruption, and IQ. It is most likely that differences between people explain why some groups have trouble building functional institutions and meeting their own needs.
Whether basic needs are met doesn’t explain why some groups within Europe are so different from each other. Southern Europe and parts of Eastern Europe have extremely low concentrations of charitable organizations. Also, good luck explaining the finding, in the post I linked in my previous comment, that vegetarianism in the US is correlated at 0.68 with English ancestry (but only weakly with European ancestry). Even different groups of white people are really, really different from each other, such as differences between Yankees and Southerners in the US, stemming from differences between settlers from different parts of England.
Human groups evolved with geographical separation and selection pressures. For example, the clannishness source I linked shows how tons of different outcomes are related to whether groups are inside or outside the Hajnal Line. Different rates of inbreeding result in different strengths of kin selection vs. reciprocal altruism. For example, here is the map of corruption with the Hajnal Line superimposed.
There is no good reason to believe that humans have equal potential for altruism and accomplishment, though there are benefits to signaling this belief.
That sounds obviously false on its face.
Well, quite. The problem I see is that equality of worth is for some a sacred value, leading to the valuing of all lives equally and direction of resources to wherever the most lives can be saved, regardless of whose they are. While it is not something that logically follows from the basic idea of directing resources wherever they can do the most good, I don’t see the EA movement grasping the nettle of what counts as the most good. Lives or QALYs are the only things on the EA table at present.
That’s unfortunate. There can be no sacred values. That way lies madness.
Nevertheless:
-- Circular Altruism
Well...
How do you come to that conclusion? When the Open Philanthropy Project researches whether we should spend more effort on dealing with the risk of solar storms, how is that lives or QALYs?
I may have a limited view of the EA movement. I had in mind primarily Givewell, whose currently recommended charities are all focussed on directing money towards the poorer parts of the world, to alleviate either disease or poverty. The Good Ventures portfolio of grants is mostly directed to the same sort of thing.
On global threats:
How would it not be? Major and prolonged geomagnetic storms threaten the lives and QALYs of everyone everywhere, so there isn’t an issue there of selecting whom to save first. Protective measures save everyone.
You confuse the strategic reasons why GiveWell makes those recommendations with the shortest summary of the intervention.
Spending money on health-care interventions does more than just save lives. There are a lot of ripple effects.
GiveWell is also producing incentives for charities in general to become more transparent and evidence-based.
You said only lives and QALYs. I’m not disputing that it also affects lives and QALYs. I’m disputing that that’s the only thing you get from it.
Well, what measure are they using?
I don’t think there’s a single measure. There’s rather an attempt to understand all the effects of an intervention as well as possible.
It depends on how you define “good”. In particular, in some value systems (and in some contexts) human lives are valued according to their productivity, and in other value systems and contexts, lives are valued regardless of their economic use or potential.
Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decisions based on the combination of both.
Re: altruistic children of altruistic parents.
I have a most altruistic mother, and I hate listening to other people’s problems, which they have created without me, presented in such a way that if only I gave a damn I would, of course, join the fight and go on helping them for however long it takes. She is quite passionate when she comes home and unloads.
In contrast, when you, for example, write up a report about a place rich in biodiversity to be made into a reserve, you get this warm feeling that you are creating a way for a problem to actually be solved, or at least become solvable. And you do it not because somebody has an Enlightenment Impulse around midnight which you can’t escape, being a dependent minor.
So: altruistic offspring, probable. EA offspring, improbable. Therefore, EA activists are right in not investing in it.