Effective Altruism (EA) is a movement that tries to invest time and money in the causes that do the most good per unit of effort. The label is applied broadly: it can refer to a philosophy, a community, a set of organisations, or a set of behaviours. It is also variously framed as donating effectively to charities, choosing one's career, doing the most good per dollar, doing good in general, or ensuring that the most good happens. These framings have slightly different implications.
The basic argument behind EA is that while you would really struggle to donate 100 times more money or time to charity than you currently do, spending a little time researching whom to donate to can multiply your impact by roughly that factor. The same argument applies to doing good with your career or volunteer hours.
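To make the arithmetic concrete, here is a minimal sketch of the leverage argument. Every number in it is an illustrative assumption, not a sourced estimate.

```python
# A minimal sketch of the leverage argument above.
# Every number here is an illustrative assumption, not a sourced estimate.

donation = 1_000  # dollars donated per year

# Impact = donation size * cost-effectiveness ("units of good" per dollar).
typical_effectiveness = 1        # an average charity
researched_effectiveness = 100   # a charity ~100x as cost-effective

impact_default = donation * typical_effectiveness        # 1,000
impact_researched = donation * researched_effectiveness  # 100,000

# Donating 100x more money would achieve the same multiplier, but is
# infeasible for most people; research is the cheaper lever.
assert impact_researched == (donation * 100) * typical_effectiveness
print(impact_default, impact_researched)
```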
The Effective Altruism movement also has its own forum, the EA Forum, which runs on the same software as LessWrong.
Key Concepts
The Scale, Neglectedness, Tractability (and Personal Fit) criteria
Despite a broad diversity of views within the EA community on which areas are most pressing, a handful of criteria are generally agreed to make an area potentially impactful to work on (either directly or through donation). These are (a sketch of how they combine follows the list):
The area has potential for impact at scale, whether in human lives saved, animal or human suffering alleviated, catastrophic crises averted, and so on. This is sometimes called "importance".
The area is neglected; that is, it has capacity to absorb more support, whether financial or in the form of skills. An area that already receives ample resources gives us less reason to expect that marginal contributions will make improvements.
The area is tractable: the problem is solvable, or at least solvable with relatively little resource investment compared to other problem areas.
A fourth, sometimes-included criterion is:
Does the individual have good personal fit? That is, do they have particular skills that will make them more effective in this area than in others?
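These criteria are often combined multiplicatively. One common formalisation is 80,000 Hours' ITN framework, in which each factor is defined so that the units cancel when multiplied, leaving "good done per extra unit of resources". The sketch below uses placeholder numbers to show the decomposition; none of the values are real estimates.

```python
# Sketch of the ITN decomposition (after 80,000 Hours' framework).
# Placeholder values only; the point is how the units telescope.

scale = 1_000_000     # units of good done per % of the problem solved
tractability = 0.5    # % of the problem solved per % increase in resources
neglectedness = 1e-9  # % increase in resources per extra dollar

# Units cancel pairwise, leaving: units of good done per extra dollar.
marginal_good_per_dollar = scale * tractability * neglectedness
print(marginal_good_per_dollar)  # 0.0005
```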
Impartiality (geographic, species, time)
Global health and wellbeing (geographic impartiality)
Peter Singer illustrates geographic impartiality with his famous "drowning child" thought experiment, in which he asks his students to imagine that their route to the university takes them past a shallow pond:
One morning, I say to them, you notice a child has fallen in and appears to be drowning. To wade in and pull the child out would be easy but it will mean that you get your clothes wet and muddy, and by the time you go home and change you will have missed your first class.
I then ask the students: do you have any obligation to rescue the child? Unanimously, the students say they do. The importance of saving a child so far outweighs the cost of getting one’s clothes muddy and missing a class, that they refuse to consider it any kind of excuse for not saving the child. Does it make a difference, I ask, that there are other people walking past the pond who would equally be able to rescue the child but are not doing so? No, the students reply, the fact that others are not doing what they ought to do is no reason why I should not do what I ought to do.
Once we are all clear about our obligations to rescue the drowning child in front of us, I ask: would it make any difference if the child were far away, in another country perhaps, but similarly in danger of death, and equally within your means to save, at no great cost – and absolutely no danger – to yourself?[1]
It is not clear why, under many moral systems, we should care more about people in our own country than about those elsewhere. And people in developing nations can often be helped roughly 100x more cheaply than those in the US.
Animal Welfare (species impartiality)
As Jeremy Bentham famously asked: "The question is not, Can they reason? nor, Can they talk? but, Can they suffer?"
If states of wellbeing matter, then they matter regardless of a being's ability to express them or change its situation. A sleeping person can be tormented by nightmares, but we still consider that suffering meaningful. Likewise, animals are capable of states of pleasure and pain, regardless of their ability to tell us about their situation.
And there are very many animals. Moreover, animals cannot vote and cannot earn money, so they are unable to change their own situation. This suggests that supporting animal welfare legislation might be a very cheap way to improve wellbeing.
On a deeper level, EAs say that species is not the marker of moral worth. If we had evolved from dolphins rather than apes, would we be less deserving of moral consideration? If this logic holds, it implies significant low-cost opportunities to improve welfare.
Longtermism (time impartiality)
A large portion of the EA community is longtermist. Longtermism refers to the idea that, if there are many future generations (hundreds, thousands, or more), and their lives are as valuable as ours, then even very small impacts on all of their lives (or moving good changes earlier in time, or bad ones later) can far outweigh impacts on people who are currently alive. Because this view is less widely accepted than charity for people alive today, longtermist interventions are also generally considered neglected. They typically focus on S-risks or X-risks.
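A minimal sketch of the multiplication behind this argument; the population sizes and per-person effects below are placeholder assumptions, not estimates.

```python
# Placeholder sketch of the longtermist multiplication described above.
current_people = 8e9                    # people alive today
future_people = current_people * 1_000  # e.g. ~1,000 generations of similar size

tiny_benefit_per_future_person = 0.001  # very small per-person improvement
large_benefit_per_current_person = 1.0  # large per-person improvement

total_future_benefit = future_people * tiny_benefit_per_future_person      # 8e9
total_present_benefit = current_people * large_benefit_per_current_person  # 8e9

# A per-person effect 1,000x smaller breaks even once it reaches 1,000x
# as many people; any broader or more durable effect dominates.
print(total_future_benefit, total_present_benefit)
```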
Examples of longtermist interventions include AI safety, pandemic preparedness, and nanotechnology security. Examples of other popular EA interventions include global poverty alleviation, malaria treatments, and vitamin supplementation in sub-Saharan Africa.
Suspicious convergence
If many unrelated factors point towards doing the same action, beware that you may be using motivated reasoning[2].
Charity effectiveness
From the scale, tractability, and neglectedness criteria above, we can see that a vast number of charities do not meet all (or indeed any) of them. A major difficulty for EA is that progress is much easier to track in some areas than others (compare tracking the cost per life saved of malaria nets with tracking reductions in existential AI risk). What is clear, however, is that the most effective charities (among those that are easy to track) deliver far more benefit than the average charity than people think, perhaps as much as 100 times (10,000%) the benefit.
An attempt at a minimal set of effective altruism axioms
Zvi wrote a set of EA axioms, as well as his disagreements with them, in Criticism of EA Criticism Contest. This list is very roughly based on his, though with very substantial changes.
Consequentialism. Or something that looks similar in most situations to most people.
Importance of Suffering. Suffering is The Bad. Happiness/pleasure is The Good.
Quantification. It is good to quantify things, with made-up numbers if necessary.
Weirdness humility. If the answer is strange, double-check your math.
Scope Sensitivity. Shut up and multiply, two are twice as good as one.
Openness to criticism. Create low-effort ways for people to test your decisions, theories of change, and sums.
Intentionality. If you plan, you will probably still fail, but if you don't plan, you are even more likely to.
Effectiveness. Do what works. The goal is to actually win.
Altruism. The best way to do good yourself is to act selflessly to do good.
Impartiality. Beyond close friends and family, we should treat all others equally.
Evangelicalism. Belief that it is good to add skills and resources to EA.
Existential Risk. Wiping out all value in the universe is really, really bad.
Appreciate norms. It is usually good to be predictable to outsiders so you can work together well. It is very tempting to find reasons to break this rule.
Seriousness. Our actions have real-world consequences. People live and die based on our choices.
Grace. In practice, people can’t live up to this list fully and that’s acceptable.
Totalization. Everything of value can be expressed in terms of this framework, though it’s often better to confine only part of your life to it and do what you want with the rest.
Additional axioms for longtermism
Expected value still applies with very small chances of very large outcomes.
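As a one-line illustration (both numbers are assumptions for the example, not estimates): a one-in-a-million chance of affecting 10^16 future lives still has an expected value of 10^10 lives, which is why longtermists multiply rather than rounding small probabilities down to zero.

```python
# Illustrative only: probability and stakes are assumed, not estimated.
p_matters = 1e-6       # very small chance the intervention changes the outcome
lives_at_stake = 1e16  # very large number of potential future lives

expected_lives = p_matters * lives_at_stake
print(expected_lives)  # 1e10: enormous in expectation despite the tiny p
```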
Total resources and how they are split
Current EA billionaires
Dustin Moskovitz
Cari Tuna
Bill Gates—Vaccination is clearly an EA cause
Vitalik Buterin gave $100m–$1bn in crypto to GiveWell.
Maybe
Melinda French Gates—Vaccination is clearly an EA cause
Elon Musk—Civilisation on Mars would reduce existential risk from nuclear/biorisk
Notion founder
Canva founder
Crypto people?
Number of Community Members
Funding in general
Impact
Global health and economic development
Lives saved: 90% CI [50,000, 10m] (Nathan Young)
The Against Malaria Foundation has distributed more than 70 million bednets to protect people (mostly children) from a debilitating parasite. (Source) [number of lives saved]
GiveDirectly has facilitated more than $100 million in direct cash transfers to families living in extreme poverty, who determine for themselves how best to spend the money. (Source) [number of lives saved]
The Schistosomiasis Control Initiative and Deworm the World Initiative invest in people's health and future well-being by treating preventable diseases that often get little attention. They have given out hundreds of millions of deworming treatments to fight intestinal parasites, which may help people earn higher incomes later in life. (Sources for SCI and DWI)
Animal welfare
Chicken-equivalent lives saved per year: 90% CI [10m, 100T] (Nathan Young)
The Humane League and Mercy for Animals, alongside many other organizations, have orchestrated corporate campaigns and legal reforms to fight the use of battery cages. Because of this work, more than 100 million hens that would have been caged instead live cage-free. (This includes all cage-free reform work, of which a sizable fraction was funded by EA-aligned donors.)
The Good Food Institute works with scientists, entrepreneurs, and investors to develop and promote meat alternatives that don’t require the suffering of farmed animals.
Existential risk and the long-term future
[How much lower or higher is the risk of existential catastrophe as a result?][3]
Organizations like the Future of Humanity Institute and the Centre for the Study of Existential Risk work on research and policy related to some of the biggest threats facing humanity, from pandemics and climate change to nuclear war and superintelligent AI systems.
Some organizations in this space, like the Center for Human-Compatible AI and the Machine Intelligence Research Institute, focus entirely on solving issues posed by advances in artificial intelligence. AI systems of the future could be very powerful and difficult to control—a dangerous combination.
Sherlock Biosciences is developing a diagnostic platform that could reduce threats from viral pandemics. (They are a private company, but much of their capital comes from a grant made by Open Philanthropy, an EA-aligned grantmaker.)
Criticisms
EA is incoherent. Consequentialism applies to one's whole life, but many EAs don't take it that seriously.
This argument applies to virtue ethics too, yet no one criticises virtue ethicists by asking "why aren't you constantly seeking to do the virtuous action?" In practice, people seem to take statements from consequentialist philosophies more seriously than statements from other philosophies.
It is more intellectually honest to surface incoherence in your worldview: "I use 80% of my time as effectively as possible" is more honest than "I try to always do the most good".
EA frames all value in terms of impact creation and this makes members sad[4]
How widespread is this?
Many EAs don’t feel this way
Some people control orders of magnitude more resources than others. They could use their time and money to improve the lives of many other people. It is not ideal to tell these people that they should feel free not to create benefit.
EA supports a culture of guilt [Kerry thread]
How does EA compare, in terms of mental wellbeing, to other communities centred around "doing good", e.g. the "Protestant work ethic" and "Catholic guilt"?
If you struggle with this, consider reading Replacing Guilt, which is one of only three sequences with a permanent place in the sidebar of the EA Forum.
EA is spending too much money
EA is spending more money but it’s not immediately obvious it is spending too much. It might be spending too little.
EA is too focused on people in developing nations
Dollars go much further in developing nations, which does lead to a natural bias in spending.
EA isn’t focused enough on systemic change in America
Note that this criticism is often raised in very similar situations to the one above, and in some of those situations the two cannot both be true.
EA is too focused on longtermism and existential risk to the detriment of people who are alive now
People who are alive now are far less neglected: they can participate in markets and democracies, and advocate for themselves.
A significant portion of funding goes to present causes
Existential risk is arguably high enough to be relevant even to people alive today
EAs defer too much to authority
EAs don’t listen to outside experts enough
EA doesn’t care about [insert issue]
The repugnant conclusion is bad
Utilitarianism is wrong
EAs lie a bit[5]
Nick Bostrom said he set a national record when really he had just taken more courses than anyone he had ever talked to.
It's hard to say. This is the sort of thing many people write in book bios. But regardless, he removed the claim when pressed.
If these are the best accusations of dishonesty one can find against a thousands-strong, decade-old movement, then it sounds like the movement is pretty honest.
There is a culture of suppressing disagreement while claiming to welcome it
On this axis, EA seems to do much better than comparable communities.
Add from Zvi’s list
Criticisms to add
Stefan Schubert's criticisms and responses
Kuhn, Ben (2013) A critique of effective altruism, Ben Kuhn’s Blog, December 2.
McMahan, Jeff (2016) Philosophical critiques of effective altruism, The Philosophers’ Magazine, vol. 73, pp. 92–99.
Nielsen, Michael (2022) Notes on effective altruism, Michael’s Notebook, June 2.
Rowe, Abraham (2022) Critiques of EA that I want to read, Effective Altruism Forum, June 19.
Wiblin, Robert & Keiran Harris (2019) Vitalik Buterin on effective altruism, better ways to fund public goods, the blockchain’s problems so far, and how it could yet change the world, 80,000 Hours, September 3.
Zhang, Linchuan (2021) The motivated reasoning critique of effective altruism, Effective Altruism Forum, September 14.
The winners of the EA Criticism and Red Teaming Contest: https://forum.effectivealtruism.org/posts/YgbpxJmEdFhFGpqci/winners-of-the-ea-criticism-and-red-teaming-contest
Related pages
Notable EA orgs
80,000 Hours, which offers advice on how to have a career with the greatest global impact
Effective Altruism, which offers support for local EA groups, as well as articles and advice about EA
GiveWell, a charity that researches the effectiveness of other charities to inform donors
The Life You Can Save, a free eBook outlining reasons to donate more, and more effectively
I haven't come up with a way of going back and forth on discussions, i.e. in the criticisms tag.