Effective Altruism from XYZ perspective
In this thread, I would like to invite people to summarise their attitude to Effective Altruism and their justification for that attitude, while identifying the framework or perspective they’re using.
Initially I prepared an article for a discussion post (that got rather long) and I realised it was from a starkly utilitarian value system with capitalistic economic assumptions. I’m interested in exploring the possibility that I’m unjustly mindkilling EA.
I’ve posted my write-up as a comment to this thread so it doesn’t get more air time than anyone else’s summary, and everyone can benefit equally from the contrasting views.
I encourage anyone who participates to write up their summary and identify their perspective BEFORE they read the others, so that the contrast can be most plain.
I confess that I get the impression that the real purpose of the thread is Clarity’s own comment, but here FWIW are my own opinions.
My underlying assumptions are consequentialist (approximately preference-utilitarian) as to ethics, and rationalist/empiricist as to epistemology.
“Effective altruism” can mean at least two things.
Attempting to do good for others as effectively as you can (at least given the level of resources you’re willing to put in).
The particular cluster of approaches to that problem found among people and organizations that presently identify themselves as EA. I take it this means things like these:
Broadly utilitarian notion of what doing good means.
Preference for directing charitable activity at the world’s worst-off people, or perhaps (some) non-human animals.
Plus, for some, a side-order of existential risk.
Preference for quantifiable benefits, measured as carefully as possible.
Focus on smallish charities aiming to pluck low-hanging fruit.
Looking to organizations like GiveWell to identify those charities.
Strong preference for “earning to give” over other ways of helping charities.
I very strongly approve of effective altruism in the first, broad, sense. I dare say narrow-sense EA is not the best possible version of broad-sense EA, but it may be the best approximation readily available.
I don’t think strong approval of broad-sense EA is in need of much justification; if one is anything resembling a utilitarian (I am, as it happens, something resembling a utilitarian) then it’s almost a tautology.
Should we weight people equally for EA purposes? (I.e., should we reject claims that “charity begins at home”, that we should actually just take care of ourselves and to hell with everyone else, that it’s morally right not to care about people far away and very different from ourselves, etc.?) To some extent this is a question about first principles and hence largely unanswerable. I think we should expect our moral intuitions to be more heavily weighted against more distant, more-different people than we would want on reflection, because they are partly a product of evolution and in the not-very-distant past our ability to help more distant, more-different people was drastically less.
Should we focus on interventions that target very poor people, people in very poor countries, etc.? Given the answer to the previous question, I think we should expect the best interventions to be there.

Crude model explaining why: any given person will have a bunch of problems and will, roughly, solve them in order of benefit/cost; they will stop when they run out of resources. Money is not the only resource but by definition interconverts with a wide variety of resources. We should expect the people with least money to have the worst problems. The governments of the places where they live will make some effort to address some of those problems, and again will roughly address them in order of benefit/cost and stop when they run out of resources. So we should expect people in the poorest countries to have their problems helped least by governments.

Likely weaknesses of model: some problems need resources not readily exchanged for money (so consider also highly “non-monetary” problems like depression, totally untreatable diseases, unrequited love), but note that by definition these are hard to address by giving money. Some people have a bad idea of how to help themselves (so consider whether ill-informed or cognitively weak people offer better opportunities for “paternalistic” charity than one would expect just on the basis of their wealth), but helping them effectively may be difficult and paternalism is kinda icky. Some governments are very ineffective (by accident or design) at using their resources to help their neediest citizens (so consider dysfunctional countries as well as poor ones), but note that helping people effectively is probably harder where governments are broken or malicious.
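To make the crude model above concrete, here is a minimal toy sketch (my own made-up numbers, not anything GiveWell or anyone else actually computes): everyone faces the same list of problems, solves them greedily in benefit/cost order until their budget runs out, and the value of an extra donated dollar is roughly the benefit/cost ratio of the first problem left unsolved.

```python
# Toy illustration of the "crude model": people solve their problems greedily
# in order of benefit/cost until their budget runs out, so an extra donated
# dollar buys more benefit for the person with the smaller budget.
# All numbers are made up for illustration.

problems = [  # (benefit, cost) in arbitrary units; same list for everyone
    (100, 10),   # e.g. treat an acute illness
    (50, 20),    # e.g. fix a leaking roof
    (30, 30),    # e.g. better tools for work
    (20, 40),    # e.g. minor comforts
]

def marginal_value_of_a_dollar(budget):
    """Benefit/cost ratio of the first problem left unsolved after
    spending the budget greedily in benefit/cost order."""
    remaining = budget
    for benefit, cost in sorted(problems, key=lambda p: p[0] / p[1], reverse=True):
        if cost <= remaining:
            remaining -= cost          # problem solved out of the person's own pocket
        else:
            return benefit / cost      # the next dollar goes toward this problem
    return 0.0                         # all problems solved; extra money does little

for budget in (5, 25, 100):
    print(budget, marginal_value_of_a_dollar(budget))
# A budget of 5 leaves the 10:1 problem unsolved; a budget of 100 solves
# everything, so a donated dollar does the most good for the poorest person.
```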
What about non-human animals? Dunno. Difficult question that I’m not going to try to resolve here.
What about existential risk? The difficulty here is that naive calculations tend to suggest we should drop everything else and reduce existential risk (where this is taken in a maybe-unusual sense that includes the “risk” that the human race endures for millions of years, but never engages in large-scale colonization of other planets or massive-scale uploading or other scenarios that produce colossal numbers of people), but this has a distinctly Pascal’s-mugging flavour to it. Personally, I’m happy to discount future things and maybe even very distant things a little, and a little exponential discounting “tames” the argument that existential risk is overwhelmingly important.
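To spell out the taming arithmetic (my own back-of-the-envelope numbers, purely illustrative): with a constant discount rate r, a benefit V realised T years from now has present value V·e^(−rT), and even a mild r shrinks that factor faster than any plausible estimate of V grows.

```latex
% Back-of-the-envelope (illustrative numbers only): present value of a
% benefit V realised T years from now, discounted at rate r per year.
\[
  \mathrm{PV} = V\, e^{-rT},
  \qquad
  r = 0.01\,\mathrm{yr}^{-1},\; T = 10^{6}\,\mathrm{yr}
  \;\Longrightarrow\;
  e^{-rT} = e^{-10^{4}} \approx 10^{-4343}.
\]
```

So even a payoff on the order of 10^50 future lives is swamped by that factor, and the “drop everything for existential risk” conclusion stops following automatically.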
Should we focus on causes for which clear quantifiable benefits can be demonstrated? I’d love to answer yes; that would make things so much easier. But the error terms in the quantification always seem to be enormous, and I don’t see any good reason to assume that the actually-best causes are all ones whose benefits are readily quantified and demonstrated.
Should we focus on small charities with low-hanging fruit? They’re surely the easiest to quantify the benefits of, and it’s likely that there are good opportunities there (and I believe GiveWell has found some). I am not altogether convinced that these are actually the best opportunities, but finding and evaluating others seems like a really hard problem. (Candidates include: larger charities whose economies of scale or greater credibility might make them more effective; political lobbying to try to redirect the huge resources of major governments; investment in for-profit enterprises that might bring big benefits to poor places.)
Does GiveWell give good advice? Conditional on my answers above, I’d say it does about as well as I can see any plausible way of doing given the resources available to them, and I don’t know of anyone else doing better.
Is “earning to give” better than working directly on doing good? I would expect this to vary a lot. If you are able to earn a good salary, wouldn’t be much more effective at a charitable organization than the other people it could afford to hire, and don’t have exceptional skills directly applicable to doing good for the neediest people, then earning to give seems like a very good bet. If (e.g.) investment in carefully-chosen for-profit enterprises is actually a better way of doing good, that might be even better (though you should then consider whether you should give them $X rather than investing in them and accepting $X less in expected return than you would get from whatever investments you’d have made purely selfishly).
Effective Altruism is a well-intentioned but flawed philosophy. This is a critique of typical EA approaches, but it might not apply to all EAs, or to alternative EA approaches.
Edit: In a follow-up comment, I clarify that this critique is primarily directed at GiveWell’s and Peter Singer’s styles of EA, which are the dominant EA approaches, but are not universal.
There is no good philosophical reason to hold EA’s axiomatic style of utilitarianism. EA seems to value lives equally, but this is implausible from psychology (which values relatives and friends more), and also implausible from non-naive consequentialism, which values people based on their contributions, not just their needs.
Even if you agree with EA’s utilitarianism, it is unclear that EA is actually effective at optimizing for it over a longer time horizon. EA focuses on maximizing lives saved in the present, but it has never been shown that this approach is optimal for human welfare over the long-run. The existential risk strand of EA gets this better, but it is too far off.
If EA is true, then moral philosophy is a solved problem. I don’t think moral philosophy works that way. Values are much harder than EA gives credit for. Betting on a particular moral philosophy with a percentage of your income shows an immense amount of confidence, and extraordinary claims require extraordinary evidence.
EA has an opportunity cost, and its confidence is crowding out better ideas. What would those better altruistic interventions be? I don’t know, but I feel like we can do better.
EAs have a weak understanding of geopolitics and demographics. The current state of the world is that Western Civilization, the goose that laid the golden egg, is declining. If indeed Western Civilization is in trouble, and we are facing near or medium-term catastrophic risks like social collapse, turning into Brazil, or war with Russia or China, then the highest-value opportunities for altruism will be at home. Unless you think we have a hard-takeoff AI scenario or technological miracles in the near-term, we should be very worried about geopolitics, demographics, and civilization in the medium-term and long-term.
If Western Civilization collapses, or is over-taken by China, then that will not be a good future for human welfare. Averting this possibility is way more high-impact than anything else that EAs are currently doing. If the West is secure and abundant, then maybe EAs have the right idea by redistributing wealth out of the West. But if the West is precarious and fragile, then redistribution makes less sense, and addressing the risks in the West seems more important.
EAs do not understand demographics, or are not taking them seriously if they do. The West is currently faltering in fertility and undergoing population replacement by people from areas with higher crime and corruption. Meanwhile, altruism itself varies between populations based on clannishness and inbreeding. We are heading towards a future that is demographically more clannish and less altruistic.
Some EAs are open borders advocates, but open borders is a ridiculously dangerous experiment for the West. They have not satisfactorily accounted for the crime and corruption that immigrants may bring. Additionally, under democracy, immigrants can vote and change the culture. Open border advocates hope that institutions will survive, but they have provided no good arguments that Western institutions will survive rapid demographic change. Institutions might seem fine and then rapidly collapse in a non-linear way. If Western Civilization collapses into ethnic turmoil or Soviet sclerosis, then humans everywhere will suffer.
Some EAs have a skeptical attitude towards parenthood, because it takes away money from charity, and believe that EAs are easier to convert than create. In some cases, EAs who want to become parents justify parenthood as an unprincipled exception. This whole conversation is ridiculous and exemplifies EAs’ flawed moral philosophy and understanding of humans. Altruistic parents are likely to have altruistic children due to the heritability of behavioral traits. If altruistic people fail to breed, then they will take their altruistic genes to the grave with them, like the Shakers. If altruism itself is a casualty of changing demographics, then human welfare will suffer in the future. (If you doubt this can happen, then check out the earlier two links, and good luck getting Eastern Europeans or Middle-Easterners interested in EA.)
I don’t think EAs do a very good job of distinguishing their moral intuitions from good philosophical arguments; see the interest of many EAs in open borders and animal rights. I do not see a large understanding in EA of what altruism is and how it can become pathological. Pathological altruism is where people become practically addicted to a feeling of doing good, which sometimes leads them to act with negative consequences. A quote from the book in that review shows some of the difficulties of disentangling moral psychology from moral philosophy:
It seems that some people have strong intuitions towards altruism or animal rights, but it’s another thing entirely to say that those arguments are philosophically strong. It seems that people who are biologically predisposed towards altruism will be motivated to find philosophical arguments that justify what they already want to do. I don’t think EAs have corrected for this bias. If EAs’ arguments are flawed, then their adoption of them must be explained by their moral intuitions or signaling desires. Since EA provides great opportunities to signal altruism, intelligence, and discernment, it seems that there would be a gigantic temptation for some personalities to get into EA and exaggerate the quality of its arguments, or adopt its axioms even though other axioms are possible. Even though EAs employ reason and philosophy unlike typical pathological altruists, moral philosophy is subjective, and choice of particular moral theories seems highly related to personality.
The other psychological bias of EAs is due to getting nerd-sniped by narrowly defining problems, or picking problems that are easier to solve or charities that are possible to evaluate. They seem to take it for granted that they will give away some of their money to charity, so they just need to find the best charity out of those that are possible to evaluate. In an inconvenient world for an altruist, the high-value opportunities are unknown or unknowable, throwing your money at what seems best might result in a negligible or negative effect, and keeping your money in your piggy bank until more obvious opportunities emerge might make the most sense.
EA isn’t all bad. It’s probably better than typical ineffective charities, so if you absolutely must give to a charity, then effective charities are probably better. EAs have the right idea by trying to evaluate charities. Many EA arguments are strong within the bounds of utilitarianism, or the confines of a particular problem. But EAs have a hard road towards justification because their philosophy advocates spending money on strong moral claims, and being wrong about important things about the world will totally throw off their results.
My criticisms here don’t apply to all EAs or all possible EA approaches, just the median EA arguments and interventions I’ve seen. It is conceivable that in the future EA will become more persuasive to a larger group of people once it has greater knowledge about the world and incorporates that knowledge into its philosophy. An alternative approach to EA would focus on preserving Western Civilization and avoiding medium-term political/demographic catastrophes. But nobody is sufficiently knowledgeable at this point to know how we could spend money towards this goal.
As someone said in another comment, there are the core tenets of EA, and there is your median EA. Since you only seem to have quibbles with the latter, I’ll address some of those, but I don’t feel like accepting or rejecting them is particularly important for being an EA in the context of the current form of the movement. We love discussing and challenging our views. Then again, I happen to agree with many median EA views.
VoiceOfRa put very concisely what I think is a median EA view here, but the comment is so deeply nested that I’m afraid it might get buried: “Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decisions based on the combination of both.”
I think this has been mentioned in the comments but not very directly. The median EA view may be not to bother with philosophy at all, because the branches that still call themselves philosophy haven’t managed to come to a consensus on central issues over the centuries, so there is little hope for an individual EA to achieve that.
However when I talk to EAs who do have a background in philosophy, I find that a lot of them are metaethical antirealists. Lukas Gloor, who also posted in this thread, has recently convinced me that antirealism, though admittedly unintuitive to me, is the more parsimonious view and thus the view under which I operate now. Under antirealism moral intuitions, or some core ones anyway, are all we have, so that there can be no philosophical arguments (and thus no good or bad ones) for them.
Even if this is not a median EA view, I would argue that most EAs act in accordance with it just out of concern for the cost-effectiveness of their movement-building work. It is not cost-effective to try to convince everyone of the most unintuitive inferences from one’s own moral system. However, among the things that are important to the individual EA, there are likely many that are very uncontroversial in most of society, and focusing on those views in one’s “evangelical” EA work is much more cost-effective.
From my moral vantage point, the alternative (I’ll consider a different counterfactual in a moment) would be the much more extraordinary claim: that I keep the money and spend it on myself, where its marginal positive impact on my happiness is easily two or three orders of magnitude lower, and my uncertainty over what will make me happy only slightly lower, than with some top charities.
You could break that up and note that in the end I’m not deciding just to “donate effectively” but to donate to a very specific intervention and charity, for example Animal Equality, making my decision much more shaky again; but I’d also have to make similarly specific, and probably only slightly less shaky, decisions when trying to spend money on my own happiness.
However, the alternative might also be:
That’s something the median EA has probably considered a good deal. Even at GiveWell there was a time in 2013 when some of the staff pondered whether it would be better to hold off on their personal donations and donate a year later, once they had discovered better giving opportunities.
However, several of your arguments seem to stem from uncertainty in the sense of “There is substantial uncertainty, so we should hold off doing X until the uncertainty is reduced.” Trading off these elements in an expected value framework and choosing the right counterfactuals is probably again a rather personal decision when it comes to investing one’s donation budget, but over time I’ve become less risk-averse and more ready to act under some uncertainty, which has hopefully brought me closer to maximizing the expected utility of my actions. Plus I don’t expect any significant decreases in uncertainty wrt the best giving opportunities in the future that I could wait for. There will hopefully be more with similar or only slightly greater levels of uncertainty though.
Part of the reason I wrote my critique is that I know that at least some EAs will learn something from it and update their thinking.
I’ll take your word that many EAs also think this way, but I don’t really see it affecting the main charitable recommendations. Followed to its logical conclusion, this outlook would result in a lot more concern about the West.
Well, there is a question about what EA is. Is EA about being effectively altruistic within your existing value system? Or is it also about improving your value system to more effectively embody your terminal values? Is it about questioning even your terminal values to make sure they are effective and altruistic?
Regardless of whether you are an antirealist, not all value systems are created equal. Many people’s value systems are hopelessly contradictory, or corrupted by politics. For example, some people claim to support gay people, but they also support unselective immigration from countries with anti-gay attitudes, which will inevitably cause negative externalities for gay people. That’s a contradiction.
I just don’t think a lot of EAs have thought their value systems through very thoroughly, and their knowledge of history, politics, and object-level social science is low. I think there are a lot of object-level facts about humanity, and events in history or going on right now, which EAs don’t know about, and which would cause them to update their approach if they knew about them and thought seriously about them.
Look at the argument that EAs make towards ineffective altruists: they know so little about charity and the world that they are hopelessly unable to achieve significant results in their charity. When EAs talk to non-EAs, they advocate that (a) people reflect on their value system and priorities, and (b) they learn about the likely consequences of charities at an object-level. I’m doing the same thing: encouraging EAs to reflect on their value systems, and attain a broader geopolitical and historical context to evaluate their interventions.
What is or isn’t controversial in society is more a function of politics than of ethics. Progressive politics is memetically dominant, potentially religiously-descended, and falsely presents itself as universal. Imagine what an EA would do in Nazi Germany under the influence of propaganda. How about Soviet Effective Altruists, would they actually do good, or would they say “collectivize faster, comrade?” How do we know we aren’t also deluded by present-day politics?
It seems like there should be some basic moral requirement that EAs give their value system a sanity-check instead of just accepting whatever the respectable politics of the time tell them. If indeed politics has a very pervasive influence on people’s knowledge and ethics, then giving your value system a sanity-check would require separating out the political component of your worldview. This would require deep knowledge of politics, history, and social science, and I just don’t see most EAs or rationalists operating at this level (I’m certainly not: the more I learn, the more I realize I don’t know).
The fact that the major EA interventions are so palatable to progressivism suggests that EA is operating with very bounded rationality. If indeed EA is bounded by progressivism, and progressivism is a flawed value system, then there are lots of EA missed opportunities lying around waiting for someone to pick them up.
I didn’t respond to your critiques that went into a more political direction because there was already discussion of those aspects there that I wouldn’t have been able to add anything to. There is concern in the movement in general and in individual EA organizations that because EAs are so predominantly computer scientists and philosophers, there is a great risk of incurring known and unknown unknowns. In the first category, more economists for example would be helpful; in the second category it will be important to bring people from a wide variety of demographics into the movement without compromising its core values. As a computer scientist I’m pretty median again.
Indeed. I’m not sure if the median EA is concerned about this problem yet, but I wouldn’t be surprised if they are. Many EA organizations are certainly very alert to the problem.
This concern manifests in movement-building (GWWC et al.) and capacity-building (80k Hours, CEA, et al.). There is also a concern, which I share but which may not yet be a median EA concern, that we should focus more on movement-wide capacity-building, networking, and some sort of quality over quantity approach to allow the movement to be better and more widely informed. (And by “quantity” I don’t mean to denigrate anyone; I just mean more people like myself who already feel welcomed in the movement because everyone speaks their dialect and whose peers are easily convinced too.)
Throughout the time that I’ve been part of the movement, the general sentiment either in the movement as a whole or within my bubble of it has shifted in some ways. One trend that I’ve perceived is that in the earlier days there was more concern over trying vs. really trying, while now concern over putting one’s activism on a long-term sustainable basis has become more important. Again, this may be just my filter bubble. This is encouraging as it shows that everyone is perfectly capable of updating, but it also indicates that as of one or two years ago, we still had a bunch to learn even concerning rather core issues. In a few more years, I’ll probably be more confident that some core questions are not so much in flux anymore that new EAs can overlook or disregard them and thereby dilute what EA currently stands for or shift it into a direction I couldn’t identify with anymore.
Again, I’m not ignoring your points on political topics, I just don’t feel sufficiently well-informed to comment. I’ve been meaning to read David Roodman’s literature review on open borders–related concerns, since I greatly enjoyed some of his other work, but I haven’t yet. David Roodman now works for the Open Philanthropy Project.
I’ve always perceived EA as whatever stands at the end of any such process, or maybe not the end but some critical threshold when a person realizes that they agree with the core tenets: that they value others’ well-being, and that greater well-being or the well-being of more beings weighs heavier than lesser well-being or the well-being of fewer. If they reach such a threshold. If they do, I see all three processes as relevant.
Of course.
Yes, thanks! That’s why I was most interested in your comment in this thread, and because all other comments that piqued my interest in similar ways already had comprehensive replies below them when I found the thread.
This needs to be turned into a concrete strategy, and I’m sure CEA is already on that. Identifying exactly what sorts of expertise are in short supply in the movement and networking among the people who possess just this expertise. I’ve made some minimal-effort attempts to pitch EA to economists, but inviting such people to speak at events like EA Global is surely a much more effective way of drawing them and their insights into the movement. That’s not limited to economists of course.
Do you have ideas for people or professions the movement would benefit from and strategies for drawing them in and making them feel welcome?
Given how many philosophers there are in the movement, this would surprise me. Is it possible that it’s more the result of the ubiquitous disagreement between philosophers?
I’ve wondered about that in the context of moral progress. Sometimes the idea of moral progress is attacked on the grounds that proponents base their claims for moral progress on how history has developed into the direction of our current status quo, which is rather pointless since by that logic any historical trend toward the status quo would then become “moral progress.” However, by my moral standards the status quo is far from perfect.
Analogously I see that the political views EAs are led to hold are so heterogeneous that some have even thought about coining new terms for this political stance (such as “newtilitarianism”), luckily only in jest. (I’m not objecting to the pun but I’m wary of labels like that.) That these political views are at least somewhat uncommon in their combination suggests to me that we’re not falling into that trap, or at least making an uncommonly good effort of avoiding it. Since the trap is pretty much the default starting point for many of us, it’s likely we still have many legs trapped in it despite this “uncommonly good effort.” The metaphor is already getting awkward, so I’ll just add that some sort of contrarian hypercorrection would of course constitute just another trap. (As it happens, there’s another discussion of the importance of diversity in the context of Open Phil in that Vox article.)
No need for you to address any particular political point I’m making. For now, it is sufficient for me to suggest that reigning progressive ideas about politics are flawed and holding EAs back, without you committing to any particular alternative view.
I’m glad to hear that EAs are focusing more on movement-building and collaboration. I think there is a lot of value in eigenaltruism: being altruistic only towards other eigenaltruistic people who “pay it forward” (see Scott Aaronson’s eigenmorality). Civilizations have been built with reciprocal altruism. The problem with most EA thinking is that it is one-way, so the altruism is consumed immediately. This post argues that morality evolved as a system of mutual obligation, and that EAs misunderstand this.
Although there is some political heterogeneity in EA, it is overwhelmed by progressives, and the main public recommendations are all progressive causes. Moral progress is a tricky concept: for example, the French Revolution is often considered moral progress, but the pictures paint another story.
On open borders, economic analyses like Roodman’s are just too narrow. They do not take into account all of the externalities, such as crime and changes to cultural institutions. OpenBorders.info addresses many of the objections; it does a good job of summarizing some of the anti-open-borders arguments, but often fails to refute them, yet this lack of refutation doesn’t translate into any updating of its general stance on immigration.
If humans are interchangeable homo economicus then open borders would be an economic and perhaps moral imperative. If indeed human groups are significantly different, such as in crime rates, then it throws a substantial wrench into open borders. If the safety of open borders is in question, then it is a risky experiment.
Some of the early indicators are scary, like the Rotherham Scandal. There are reports of similar coverups in other areas, and economic analyses do not capture the harms to these thousands of children. High-crime areas where the police have trouble enforcing rule of law are well documented in Europe: they are called “no-go zones” or “sensitive urban zones” (“no-go zone” is controversial because technically you can go there, but would you want to go to this zone, especially if you were Jewish?). Britain literally has Sharia Patrols harassing gay people and women.
These are just the tip of the iceberg of what is happening with current levels of immigration. Just imagine what happens with fully open borders. I really don’t think its advocates have grappled with this graph, and what it means for Europe under open borders. No matter how generous Europe was, its institutions would never be able to handle the wave of immigrants, and open borders advocates are seriously kidding themselves if they don’t see that Europe would turn into South Africa mixed with Syria, and the US would turn into Brazil. And then who would send aid to Africa?
Rule of law is slowly breaking down in the West, and elite Westerners are sitting in their filter bubbles fiddling while Rome burns. I’m not telling you to accept this scenario as likely; you would need to go do your own research at the object-level. But with even a small risk that this scenario is possible, it’s very significant for future human welfare.
I’ll think about it. I think some of the sources I’ve cited start answering that question: finding people who are knowledgeable about the giant space of stuff that the media and academia are sweeping under the carpet for political reasons.
Before I delay my reply until I’ve read everything you’ve linked, I’ll rather post a WIP reply.
Thanks for all the data! I hope I’ll have time to look into Open Borders some more in August.
Error theorists would say that the blog post “Effective Altruists are Cute but Wrong” is cute but wrong, but more generally the idea of using PageRank for morality is beautifully elegant (but beautifully elegant things have often turned out imperfect in practice in my experience). I still have to read the rest of the blog post though.
Eigendemocracy reminds me of Cory Doctorow’s whuffie idea.
An interesting case for eigenmorality is when you have distinct groups that cooperate amongst themselves and defect against others. Especially interesting is the case where there are two large, competing groups that are about the same size.
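Here is a minimal sketch of that case as I understand the eigenmorality idea (my own toy cooperation matrix and plain power iteration, not Aaronson’s actual code; I made one bloc slightly larger to break the tie):

```python
import numpy as np

# Toy eigenmorality sketch (illustrative only): two blocs that cooperate
# internally and mostly defect against each other. Entry C[i, j] is how much
# agent i cooperates with agent j; the leading eigenvector gives the
# "morality" scores, PageRank-style.

n_a, n_b = 6, 4                      # bloc A is slightly larger than bloc B
n = n_a + n_b
C = np.full((n, n), 0.05)            # small baseline cooperation with everyone
C[:n_a, :n_a] = 1.0                  # bloc A cooperates within itself
C[n_a:, n_a:] = 1.0                  # bloc B cooperates within itself
np.fill_diagonal(C, 0.0)             # no self-cooperation

scores = np.ones(n) / n
for _ in range(1000):                # power iteration toward the leading eigenvector
    scores = C @ scores
    scores /= scores.sum()

print(np.round(scores, 3))
# With these numbers the larger bloc ends up with the higher scores, which is
# one way to see how "moral by eigenvector" can end up rewarding membership
# in the bigger coalition rather than anything we would call morality.
```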
“I’ll take your word that many EAs also think this way, but I don’t really see it affecting the main charitable recommendations. Followed to its logical conclusion, this outlook would result in a lot more concern about the West.”
Can you elaborate please? From my perspective, just because a Western citizen is richer / more powerful doesn’t mean that helping to satisfy their preferences is more valuable in terms of indirect effects. Or are you talking about who to persuade? Because I don’t see many EA orgs asking Dalit groups for their cash or time yet.
It’s not the preferences of the West that are inherently more valuable, it’s the integrity of its institutions, such as rule of law, freedom of speech, etc… If the West declines, then it’s going to have negative flow-through effects for the rest of the world.
I think it’s clearer, then, if you say sound institutions rather than the West?
There are other countries with sound institutions, like Singapore and Japan, but I’m not so worried about them as I am about the West, because they have an eye towards self-preservation. For instance, both those countries have declining birth rates, but they protect their own rule of law (unlike the West), and have more cautious immigration policies that help prevent their populations from being replaced by foreign ones (unlike the West). The West, unlike sensible Asian countries, is playing a dangerous game by treating its institutions in a cavalier way for ill-thought-out redistributionist projects and importing leftist voting blocs.
EAs should also be more worried about decline in the West, because Westerners (particularly NW Europeans) are more into charity than other populations (e.g. Eastern Europeans are super-low in charity). My previous post documents this. A Chinese- or Russian- dominated future is really, really bad for EA, for existential risk prevention, and for AI safety.
I wouldn’t be so cavalier about that. Japan, specifically, has about zero immigration and its population, not to mention the workforce, is already falling. Demographics is a bitch. Without any major changes, in a few decades Japan will be a backwater full of old people’s homes that some Chinese trillionaire might decide to buy on a whim and turn into a large theme park.
Open borders and no immigration are like Scylla and Charybdis—neither is a particularly appealing option for a rich and aging country.
I also feel that the question “how much immigration to allow” is overrated. I consider it much less important than the question of “precisely what kind of people should we allow in”. A desirable country has an excellent opportunity to filter a part of its future population and should use it.
I agree that Japan has its own problems. No solution is particularly good if they can’t get their birth rates up. Singapore also has low birth rates. Whatever is preventing high-IQ people from reproducing might be something that EAs should look into.
“How much immigration to allow” and “precisely what kind of people should we allow in” can be related, because the more immigration you allow, the less selective you are probably being, unless you have a long line of qualified applicants. Skepticism of open borders doesn’t require being against immigration in general.
As you say, a filtered immigration population could be very valuable. For example, you could have “open borders” for educated professionals from low-crime, low-corruption countries with compatible value systems and who are encouraged to assimilate. I’m pretty sure this isn’t what most open borders advocates mean by “open borders,” though.
The left doesn’t “want” a responsible immigration policy either. For their political goals, they want a large and dissatisfied voting bloc. And for their signaling goals, it’s much more holy to invite poor, unskilled people rather than skilled professionals who want to assimilate.
If you aren’t aware of the relevant decision theory, then I have good news for you!
I’m not sure this is true, at least in the narrow instance of rationalists trying to make maximally effective decisions based on well-defined uncertainties. In principle, at least, it should be possible to calculate the value of information. Decision theory has a concept called the expected value of perfect information. If you’re not 100% sure of something, but the cost of obtaining information is high (which it generally is in philosophy, as evidenced by the somewhat slow progress over the centuries) and giving opportunities are shrinking (which they are for many areas, as conditions improve), then you probably want to risk giving sub-optimally by giving now rather than later. The price of information is simply higher than the expected value.
Unfortunately, you might still need to make a judgement call to guesstimate the values to plug in.
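A minimal sketch of what plugging in such guesstimates might look like, with all numbers invented purely for illustration (two hypothetical charities, A and B, and my made-up probabilities and payoffs):

```python
# Toy expected-value-of-perfect-information (EVPI) calculation with invented
# numbers. Actions are "give to A now" or "give to B now"; the unknown state
# is which charity is actually more effective.

p_A_better = 0.6                       # my guess at the probability that A is better
payoffs = {                            # utility of each action in each state
    #            A is better   B is better
    "give to A": (10.0,        4.0),
    "give to B": ( 5.0,        8.0),
}

def expected(action):
    u_if_A, u_if_B = payoffs[action]
    return p_A_better * u_if_A + (1 - p_A_better) * u_if_B

best_without_info = max(expected(a) for a in payoffs)          # act on current beliefs
best_with_info = (p_A_better * max(u[0] for u in payoffs.values())
                  + (1 - p_A_better) * max(u[1] for u in payoffs.values()))
evpi = best_with_info - best_without_info                      # here: 9.2 - 7.6 = 1.6

print(best_without_info, best_with_info, evpi)
# If finding out which charity is better would cost more (in foregone giving,
# time, research effort) than the EVPI, just give now under uncertainty.
```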
Thanks! I hadn’t seen the formulae for the expected value of perfect information before. I haven’t taken the time to think them through yet, but maybe they’ll come in handy at some point.
If anyone’s skimming through these comments, it’s worthwhile noting that most of my original ideas as seen in my top-level comment have been thoroughly refuted.
tl;dr—My perspective is, in short, echoed on Marginal Revolution:
Those criticisms that remain and many stronger points of contention are far more eloquently independently explained by Journeyman’s critique here.
Anyhow, I don’t like the movement’s branding, which is essentially its core feature, since the community would probably reorganise around a new brand anyway. Altruism is fictional, hypothetical, doesn’t exist.
W. Pedia.
Thanks, this helped me!
Thank you for taking the time to write such a detailed description of the issue.
One minor thing
Many EAs do seem to understand this to varying degrees, whether explicitly or implicitly: they value other EAs highly because of the flow-through effects.
That would be another example of things which some EAs do, but which don’t yet seem to percolate through to the public-facing parts of the movement. For example, valuing other EAs due to flow-through effects contradicts Singer’s view, as far as I understand him:
I don’t get your argument there. After all, you might e.g. value other EAs instrumentally because they help members of other species. That is, you intrinsically value an EA like anyone else, but you’re inclined to help them more because that will translate into others being helped.
A good straightforward illustration of how institutions are entangled with culture is the difficulty the West has had exporting democracy to the Middle East.
Syrian openish border events reignited my interest in this so I did a bit more reading:
What are you basing your moral philosophy on, if it’s not moral intuitions?
To me that seems like you object to EA because you stereotype it and then find that the stereotype produces problems. 80,000 hours lately wrote a post indicating that they don’t believe that a majority should do earning-to-give: https://80000hours.org/2015/07/80000-hours-thinks-that-only-a-small-proportion-of-people-should-earn-to-give-long-term/
A lot of the post seems to confuse complex strategic moves, like GiveWell’s move to start by focusing on lives saved by proven interventions, with the belief that lives saved by proven interventions are the most important thing.
It is possible that some of a group doesn’t believe the logical consequences of its own positions. That doesn’t make them immune from criticism based on those logical consequences.
GiveWell’s actual positions on its charity recommendations are quite long documents. The problem comes when you reduce the complex position to a simplified one.
Deworming saves lives, but at the same time it’s also better at getting children to attend school than a lot of other interventions. The fact that the argument for deworming is commonly made via lives saved in no way implies that the other benefits don’t factor in.
I do believe that my comment accurately characterizes the large EA organizations like GiveWell and philosophers like Peter Singer. I do realize that EAs are smart people, and many individual EAs have other beliefs and engage in all sorts of research. For example, some EAs are concerned about nuclear war with Russia, and today I discovered the Global Catastrophic Risk Institute and the Global Priorities Project, which are outside of my critique. However, for now, Peter Singer, GiveWell, Giving What We Can, and similar approaches are the most emblematic of EA, and it is towards this style of EA that my critique is directed, which I indicated in my previous comment when I said I was addressing “typical” or “median” EA. I believe it is fair to judge EA (as it currently exists) by these dominant approaches.
I disagree with you that I am stereotyping, but I think it’s good for me to clarify the scope of my critique, so I am adding a note to my previous comment that links to this comment.
That 80,000 Hours post doesn’t contradict my argument at all, and in fact reinforces it. My comment never argued that EAs believe that everyone should earn to give, only that they are very confident about their moral claims about what people should do with their money. That post still shows that 80,000 Hours believes that at least 10% of people should earn to give, which is still an incredibly strong ethical claim.
Obviously GiveWell cannot show that their interventions are the “most important thing.” But GiveWell does claim that its proven interventions are a sufficiently good thing to justify you spending money on them, and this is an immense moral claim. It’s not like GiveWell is a purely informational website.
In the context of the larger EA movement, Peter Singer’s philosophy and EA pledges argue with incredible confidence that people should be giving. EA is extremely evangelical, and Singer’s philosophy is incredibly flawed and emotionally manipulative.
The problem is that none of the most common EA approaches have defeated the “null giving hypothesis” of spending your money on yourself, or saving it in an investment account and then giving the compounded amount to another cause in the future. If someone is already insisting on giving to charity, then GiveWell might redirect their money in a direction that is actually useful, but EA is also trying to get people involved who were not doing charity before, and its moral arguments and understanding of the world are just not strong enough to justify spending money on the most dominant charitable approaches.
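To spell out the arithmetic of that null hypothesis (all numbers invented for illustration):

```python
# "Null giving hypothesis" arithmetic with invented numbers: donating D now
# vs. investing it at annual return r and donating after t years.
D, r, t = 1_000.0, 0.05, 20
later = D * (1 + r) ** t               # about 2,653 after 20 years at 5%
print(round(later, 2))
# Giving now only beats waiting if a dollar given today does at least
# (1 + r)^t times as much good as a dollar given in year t, which is the kind
# of strong empirical claim that has not been settled either way.
```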
“X is the most efficient birdfeeder on the market” is a different type of claim from “the best birdfeeder on the market is worth spending money on,” or “feeding birds is a moral imperative,” or “we should pledge to feed birds and evangelize other people to do so, too.” My impression is that EAs are getting these kinds of claims mixed up.
Interesting that the solutions you’re jumping to are about defending the ‘west’ and beating the south / east rather than working with the south/east to make sure the best of both is shared?
To be clear, when I speak of defending the West, I am mostly thinking of defending the West against self-inflicted problems. Nobody is talking about “beating” the global south / east. If the West declines, then it won’t be in a very good position to share anything with anyone.
The consequentialist issue could be addressed by the assumption that if only people’s needs were met, their potential for contribution would be equal. Do the people involved in EA generally believe that?
EAs might believe that, but that would be an example of their lack of knowledge of humanity and adoption of simplistic progressivism. Human traits for either altruism or accomplishment are not distributed evenly: people vary in clannishness, charity, civic-mindedness, corruption, and IQ. It is most likely that differences between people explain why some groups have trouble building functional institutions and meeting their own needs.
Whether basic needs are met doesn’t explain why some groups within Europe are so different from each other. Southern Europe and parts of Eastern Europe have extremely low concentrations of charitable organizations. Also, good luck explaining the finding in the post I linked in my previous comment that vegetarianism in the US is correlated at 0.68 with English ancestry (but only weakly with European ancestry). Even different groups of white people are really, really different from each other, such as differences between Yankees and Southerners in the US, stemming from differences between settlers from different parts of England.
Human groups evolved with geographical separation and selection pressures. For example, the clannishness source I linked shows how tons of different outcomes are related to whether groups are inside or outside the Hajnal Line of inbreeding. Different rates of inbreeding will result in different strengths of kin selection vs. reciprocal altruism. For example, here is the map of corruption with the Hajnal Line superimposed.
There is no good reason to believe that humans have equal potential for altruism and accomplishment, though there are benefits to signaling this belief.
That sounds obviously false on its face.
Well, quite. The problem I see is that equality of worth is for some a sacred value, leading to the valuing of all lives equally and direction of resources to wherever the most lives can be saved, regardless of whose they are. While it is not something that logically follows from the basic idea of directing resources wherever they can do the most good, I don’t see the EA movement grasping the nettle of what counts as the most good. Lives or QALYs are the only things on the EA table at present.
That’s unfortunate. There can be no sacred values. That way lies madness.
Nevertheless:
-- Circular Altruism
Well...
How do you come to that conclusion? When the Open Philanthropy Project researches whether we should spend more effort on dealing with the risk of solar storms, how is that about lives or QALYs?
I may have a limited view of the EA movement. I had in mind primarily Givewell, whose currently recommended charities are all focussed on directing money towards the poorer parts of the world, to alleviate either disease or poverty. The Good Ventures portfolio of grants is mostly directed to the same sort of thing.
On global threats:
How would it not be? Major and prolonged geomagnetic storms threaten the lives and QALYs of everyone everywhere, so there isn’t an issue there of selecting who to save first. Protective measures save everyone.
You confuse the strategic reasons why GiveWell makes those recommendations with the shortest summary of the intervention.
Spending money on health care interventions does more than just save lives. There are a lot of ripple effects.
GiveWell is also producing incentives for charities in general to become more transparent and evidence-based.
You said only lives and QALYs. I’m not disputing that it also affects lives and QALYs. I’m disputing that that’s the only thing you get from it.
Well, what measure are they using?
I don’t think there’s a single measure. There is, rather, an attempt to understand all the effects of an intervention as well as possible.
It depends on how you define “good”. In particular, in some value systems (and in some contexts) human lives are valued according to their productivity, and in other value systems and contexts, lives are valued regardless of their economic use or potential.
Even if he values human lives terminally, a utilitarian should assign unequal instrumental value to different human lives and make decisions based on the combination of both.
Re: altruistic children of altruistic parents.
I have a most altruistic mother, and I hate listening to other people’s problems, which they have created without me, presented in such a way that if only I gave a damn I would, of course, join the fight and go on helping them for however long it takes. She is quite passionate when she comes home and unloads.
In contrast, when you, for example, write up a report about a place rich in biodiversity to be made into a reserve, you get this warm feeling that you are creating a way for a problem to actually be solved, or at least made solvable. And you do it not because somebody had an Enlightenment impulse around midnight which you, being a dependent minor, can’t escape.
So: altruistic offspring, probable. EA offspring, improbable. Therefore, EA activists are right in not investing in it.
Okay, a summary of my attitude towards EA is that EA rationally follows from a set of weird premises that are not shared by most people and certainly not by me. I do not have any desire to maximize utility in a way that considers utility for every human being equally. I prefer increasing utility for myself, my family, friends, countrymen, and people like me. Every time I pay for electricity for my computer rather than sending the money to a third world peasant is, according to EA, a failure to maximize utility.
Also, I believe that most cases of EA producing very counterintuitive results are just examples of cases where the weirdness of EA becomes obvious.
I’m sad that people still think EAers endorse such a naive and short-time-horizon type of optimizing utility. It would obviously not optimize any reasonable utility function over a reasonable timeframe for you to stop paying for electricity for your computer.
More generally, I think most EAers have a much more sophisticated understanding of their values, and the psychology of optimizing them, than you give them credit for. As far as I know, nobody who identifies with EA routinely makes individual decisions between personal purchases and donating. Instead, most people allocate a “charity budget” periodically and make sure they feel ok about both the charity budget and the amount they spend on themselves. Very few people, if any, cut personal spending to the point where they have to worry about, e.g., electricity bills.
I do know—indeed, live with :S—a couple.
So I think most EAs have come to the point where they realise that small trade-offs and agonising over them displace other good things, so they try and find a way of setting a limit by year or whatever. But you know many people agonise and make trade-offs, it’s just that often it isn’t giving to the poor that’s the counterfactual, it’s saving or paying the mortgage, or buying a better holiday or school for their children or whatever. If you don’t think like that, then you have everything you need?? http://www.givinggladly.com/ and http://www.jefftk.com/index have documented this journey of living well with generosity. Sounds like it might be worth a read :)
edit: Soz Ben, I think I put this comment in the wrong place!
As I said before, it is possible that some of a group doesn’t believe the logical consequences of its own positions. That doesn’t make them immune from criticism based on those logical consequences.
It’s true, of course, that EA proponents don’t do this, but that only shows that EA is unworkable even to EA proponents. If you have a charity budget, there’s no good principled reason why you should restrict your donation to your charity budget. Arguments I’ve seen include:
You need to be able to make money to perform EA and going poor would be counterproductive—true, but most of the money you spend on personal entertainment is not being used to help you make money.
You would find it psychologically intolerable to not spend a certain amount of money on personal entertainment. But by this reasoning, the amount you should spend on charity is an amount that makes you uncomfortable, but just as uncomfortable as you can get without long-term effects on your psychological health and your motivation to donate. (It also means that your first priority should be to self-modify to have less psychological need for entertainment.) Also, it could be used to justify almost any level of giving, and in the limit, it’s equivalent to “I put a higher value on myself, just for a slightly different reason than everyone else who ‘doesn’t value people equally’ puts a higher value on themselves.”
EA states that it is good to spend money on charity, but being good is not the same thing as having a moral obligation to do it; it’s okay to not do as much good as you conceivably could. I find this explanation unconvincing because it would then equally justify not doing any good at all.
Effective Altruism says that all humans have roughly equal intrinsic value and takes necessary steps to gather evidence and quantify the degree to which humans are helped.
Short, but pretty much summarizes the entirety of the appeal for me. Is there even a name for the two perspectives contained in that sentence?
I never actually realized that ‘all humans have roughly equal intrinsic value’ was a core tenet of EA.
I like Effective Altruism a lot—I follow a lot of effective altruism blogs, I adopt a lot of mental models and tools, I think the idea is great for a lot of people.
I’m highly interested in how to be effective, and I’m highly interested in how to do good, and EA gives some great ideas on both concepts.
That being said, what I’m not interested in as my sole aim is to be maximally effective at doing good. I’m more interested in expressing my values in as large and impactful a way as possible—and in allowing others to do the same. This happens to coincide with doing lots and lots of good, but it definitely doesn’t mean that I would begin to sacrifice my other values (e.g. fun, peace, expression) to maximize good. I’m interested in allowing others to express THEIR values, even if it means they’re incredibly selfish and do very little good—I suppose this almost begins to sound utilitarian, and I suppose it is—but again, I’m not going to sacrifice appreciable amounts of my own utility if it means more utility for others, and I don’t expect others to do the same.
In terms of your critique of EA, I think you’ve completely bought into the idea of “revealed preferences”: that people’s utility is revealed in what they want. However, a large portion of psychology research shows something very different: the behavior that gets reinforced runs along a completely separate “compulsion” pathway from what people enjoy, find happiness in, or get fulfilled by.
Economics doesn’t really care about that shit if it doesn’t affect people’s actions, so it’s easier to talk about “revealed preferences.” But as a utilitarian, you should be aware of all the separate pathways that the brain evolved to survive and replicate—many of them separate from happiness, fulfillment, pleasure, and other things which we like to talk about when we talk about “utility”.
The upshot of how all this relates to your points is that the free market/racking up money often hits a bunch of these compulsion pathways through the accumulation of money, but often IGNORES other areas of utility. Givewell is trying to fix the imbalance.
hacking the norm of reciprocity for the evolutionary benefit of future generations
You know what, you’re lesswrong. I didn’t realise before reading your comment. You’ve completely reframed some of my thinking. Thank you.
I’m going to rebrand myself as an Effective Mutualist!
Then I’m going to get serious and start reading up on how we might otherwise infer what will help others feel happiness other than via their revealed preferences. I still feel compelled to help others, beyond that which will materially benefit me or society in the long term (my thinking is that, if everyone were more mutualistic, then over the long term the more parasitic people would die off).
edit 1: The left wing tries to abolish poverty, the right tries to abolish bureaucracy. Perhaps there’s some innate psychological divide between people who try to get rid of social problems immediately, and those who want to do it sustainably.
(Upvoted for willingness to change your mind.)
It’s interesting to ask to what extent this is true of everyone—I think we’ve discussed this before Matt.
Your version and phrasing of what you’re interested in is particular to you, but we could broaden the question out to ask how far people have moved away from having primarily self-centred drives which overwhelm others when significant self-sacrifice is on the table. I think some people have gone a long way in moving away from that, but I’m sceptical that any single human being goes the full distance. Most EAs plausibly don’t make any significant self-sacrifices if measured in terms of their happiness significantly dipping.* The people I know who have gone the furthest may be Joey and Kate Savoie, with whom I’ve talked about these issues a lot.
* Which doesn’t mean they haven’t done a lot of good! If people can donate 5% or 10% or 20% of their income without becoming significantly less happy then that’s great, and convincing people to do that is low-hanging fruit that we should prioritise, rather than focusing our energies on squeezing out extra sacrifices that start to really eat into their happiness. The good consequences of people donating are what we really care about, after all, not the level of sacrifice they themselves are making.
Yes, I think in terms of my actions, I’m probably similar to many effective altruists. There are routes I wouldn’t consider, such as earning to give, but all in all I’m probably on a similar path to many other EAs who want to get into tech entrepreneurship.
I think where I differ is not in my actions, but in my moral aims. Many EAs, if given a pill that could make them able to work all day on helping others, sustainably, without changing their enjoyment of said activities, would think they ought to take it—and a sizeable portion probably would take it. I’d never take that pill, and wouldn’t feel bad about that choice.
Could charity distort market signals, crippling the ability of sponsored economies to develop sustainably and leading to negative utility in the long term?
Hikma and Norbrook are examples of ethical UK/worldwide pharmaceutical companies. I’ve worked for and can vouch for both.
I’m sorry to say that this all seems rather muddled. I don’t know how much of the muddle is actually in my brain.
You say “Effective Altruism isn’t utilitarian” and then link to an LW post whose central complaint is that EA is too utilitarian. Then you say “EA is prioritarian” by which I guess you mean it says “pick the most important cause and give only to it” and link to an LW post that doesn’t say anything remotely like that (it just says: here is one particular cause, see how much good you can do by giving to it).
You say GiveWell doesn’t see market efficiency as inherently valuable. I am not aware of any evidence for that; what there is evidence for is that they don’t see market efficiency as something worth throwing money at, and I have to say this seems very obviously correct; am I missing something here?
You say GiveWell’s “theory of value relates to health status”, by which I think you mean that they assess benefit as increase in QALYs. That seems pretty reasonable to me and I don’t understand your objections. (I’m sure there are ways one can help people that don’t show up in a QALY measurement, but when evaluating charities that aim to save lives or cure diseases—which is a large fraction of what charities targeting the world’s neediest people are doing—it seems reasonable; and when they look at e.g. GiveDirectly I don’t think they try to translate everything into QALYs.) Would you like to clarify what you’re objecting to and why?
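(To make the unit concrete, here is a minimal sketch of the usual cost-per-QALY bookkeeping; all the figures are invented for illustration and are not GiveWell’s.)

```python
# Toy illustration of benefit measured in QALYs; every number here is made up,
# not GiveWell's data or method.
def qalys_gained(extra_life_years: float, quality_weight: float) -> float:
    # One QALY = one year lived in full health; lower quality scales it down.
    return extra_life_years * quality_weight

cost_usd = 10_000  # hypothetical cost of an intervention
benefit = qalys_gained(extra_life_years=30, quality_weight=0.9)
print(f"{benefit:.1f} QALYs gained, ~${cost_usd / benefit:,.0f} per QALY")
```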
You say “Donation is inherently supply driven, so it will inevitably be inefficient” but the whole point of EA is to try and figure out where the demand is and move donations there. (Except that “demand” needs to be reinterpreted slightly. “Need” would be a better term.)
I don’t understand your paragraph beginning “Inefficient market for warm fuzzies” at all, but I doubt it matters since EA is supposed to be all about what one does to actually help people; warm fuzzies should be “purchased separately”.
You don’t have much to say about how you think this could all be done better. You talk about “market based solutions” but (to me—perhaps others are cleverer) it’s far from clear what these might be. Markets, roughly, optimize for utility weighted by wealth, and unsurprisingly enough the worst-off people by most measures tend to be very poor. Accordingly, no demand-driven market-based solution can possibly do much for them, because they haven’t enough money to generate much demand in the economic sense. (Even if they had access to the relevant markets, which as you mention in passing they may well not.) So … what do you have in mind, and why is it credible that it comes closer to maximizing utility than present-day EA?
Thanks for your comment.
Read the first comment on that post and the discussion the OP has with them.
No, I’m saying that it ‘chooses more important causes and weights them higher’.
Is this the flow through effects link? I’m not sure what you’re talking about.
The evidence that they believe that is in the link, where GiveWell says it; the other links are to 80K or GWWC echoing it (I don’t recall which from memory).
I would say you are missing something. As to whether market efficiency is something worth throwing money at: market efficiency by definition refers to a case where money is being thrown at something that is worthwhile—a coincidence of interest between supply and demand.
Certainly. If QALYs are valuable, then curing disease and saving lives is inherently valuable. However, people experience death and disease differently. Very differently. How can we work out how ‘bad’ it is for them? Well, we could use QALYs and generalise across the entire disease for all people—or we could infer it from what people actually do in relation to it. Do they save up money to buy bednets, or do they spend that money on a donkey to visit their girlfriend in the next village (a fictional, kinda silly example, but it illustrates my point)? If they have a preference for bednets above all other alternative options, and still can’t afford them, they have an incentive to contribute their labour, for instance, to their community in a way that improves the lives of others and helps those people reach their preferences, while earning money to buy those bednets. If they can’t be valuable to their community, then their death is a net positive for the overall economic efficiency of their community. That is, unless they are artificially subsidised in that kind of lifestyle by certain kinds of charity.
Demand can only be reliably inferred from past behaviour. If someone buys a loaf of bread every week, that’s demand for bread. If there’s a 1⁄23 chance someone in a village gets cholera every year, and that village has a reputation for being able to afford the cholera treatment, then that’s demand for cholera treatment. People ‘demanding’ or begging, or a tourist feeling sorry for someone out of a subjective judgement that their lifestyle is inferior, is not demand. It can be interpreted as need, or even modelled as a need consequent on something else—i.e. you need to eat food to survive—but then the question is something else: are you donating because they ‘demand’ something, so that you’re fulfilling a subjective desire or utility state for them (which I believe is empathy-driven), or are you fulfilling a utility conditional on your guilt or something else?
If a non-EA gets 100% warm fuzzies from donating to save polar bears or another thing they intuit, their dynamic inconsistency means their cause preference changes, and it’s no big deal for them to switch charities when they feel like it.
An EA gets warm fuzzies only if they can satisfy some complicated equation and the approval of their EA buddies, while that approval changes as information gets updated. However, they’re also fighting against their intuitive warm fuzzies for things like polar bears, and the same dynamic inconsistency amongst non-effective causes that non-EAs have—for instance, feeling like donating to save guide dogs when primed by seeing a local blind man. Since this is far more complicated, the prospect of regret would be higher—at least, I think so intuitively, no?
I had never thought about it like that. I have to think about this some more. What a novel way of looking at it—thanks!
That’s just your opinion. Many tourists love unique and different cultures for their own sake. Or they might have a unique language to share, or anything. If they are alive, it’s because they have survived in an evolutionarily sound way till now, so as a rough heuristic they’re okay until there’s some kind of disaster event.
I think setting up less difficult conditions for maximum utility makes it easier to maximise your utility. There’s no need to slap a label on it. If I call something ‘effective fruit eating’ where I maximise my utility by successfully eating the sultanas across the room from me right now, it’s not very hard for me to maximise my utility.
Could you explain the idea of markets optimising for utility weighted by wealth more? I’m having trouble wrapping my head around the concept.
edit 1: perhaps existing EAs could maximise their utility more by getting treated for scrupulosity?
OK, done. Now what? (I did not find that reading that material changed (a) my opinion that Dias’s complaint was basically that EA is too utilitarian, nor (b) my impression that you are complaining it isn’t utilitarian enough.)
And you regard that as a bad thing? Evidently I’m missing something, because weighting more important things more highly seems obviously sensible. What am I missing?
No, it’s the one linked to the word “prioritarian” in your comment.
Have either you or I got something exactly backwards? The post at the far end of that link (the “flow-through effects” one, right?) has the founder of GiveWell saying explicitly that market efficiency is valuable, but you’re citing it as support for your claim that GiveWell doesn’t see market efficiency as valuable.
Any transaction in any market (efficient or not) is such a case (at least with a suitable, somewhat nonstandard, definition of “worthwhile”, but I think you need that for any claim along these lines to be true). It is not clear that the difference between a more and a less efficient market is in how money is being thrown at how-worthwhile things. (Is it?)
Sure. But if what you’re trying to do is get an overall estimate of how much good a particular intervention does (or, harder: how much good it would do) then (1) you are not particularly interested in all those personal idiosyncrasies, except in so far as they come together to make some kind of average, and (2) you almost certainly don’t have enough information about people’s actions to know how much they would value whatever-it-is—because it may simply not be available to them; they may not know about it; they may not know enough about it; and, in the sort of market-based scenario I think you have in mind, perceived benefit is confounded with ability to pay.
(I’ll have more to say about that last point later, but one crude example for now. Imagine someone who is in prison and has either no possessions, or at any rate no access to his possessions. He is tortured for three hours every day. You have a wonderful new device, the Tortur-B-Gon, which magically confers immunity to torture. Words can barely express how much benefit our hypothetical prisoner would get from the Tortur-B-Gon, but you will never find that out by putting it for sale on the open market and waiting, because the prisoner doesn’t know about the market, can’t get to the shops, and can’t pay for the device.)
You are, I think, taking “demand” strictly in the economic sense of willingness to pay. OK, but then note that the supply-versus-demand dichotomy you’re appealing to isn’t exhaustive; there are things that happen that are not either supply or demand. In particular, charitable donation is not “supply-driven” if we take “supply” strictly in the economic sense of willingness to produce at a given price; charitable donation is not the same thing as selling.
Suppose I dedicate my life to understanding patterns of starvation, and I find various patterns that extremely reliably predict when and where a lot of people are likely to starve to death. I also conduct research into how effective various obvious measures (e.g., dropping food parcels by helicopter, walking in and handing out money, or when there’s enough warning doing things like supplying fertilizer for crops ahead of time) will be in reducing starvation, and I find various highly predictive patterns there too.
And then I watch the world for these patterns, and when I find a place and time where lots of people are likely to starve to death and one of the readily available countermeasures is likely to be successful, I do it. (Of course this costs a pile of money; let’s suppose I’m rich.)
The result will be that a lot of people will survive who would otherwise have starved to death.
You may, if you please, categorize this as “supply-driven” and say it must therefore be inefficient. Does this insight enable you either to tell me why the scenario I’ve described is impossible, or else to show how to save more lives for the same amount of money by not being “supply-driven”?
(I’m still not sure I understand what you’re saying about warm fuzzies, but I still don’t think it matters because EA is not about warm fuzzies so I’m not going to try very hard.)
Everything I say is just my opinion. Do you mean something more than that? (And is it in fact your opinion that the worst-off people by most measures don’t tend to be very poor? For instance, suppose we looked at the following populations: 1. People who have involuntarily had nothing to eat for at least five days in the last month. 2. Parents who have had at least three children die. 3. People who die before the age of 40. I’m guessing that those groups are all statistically a lot poorer than the population as a whole.)
I have no idea what tourists’ love of unique and different cultures has to do with this. I agree that someone who is still alive is necessarily still alive and that puts an upper bound on how things are for them, but it seems to me to be a very low upper bound.
Sorry, I don’t think I understand how that’s responsive to the question I asked. Is there any chance that you could answer it (or, of course, explain why you choose not to) more explicitly?
What markets give us (in theory, subject to various conditions) is a Pareto-efficient allocation of resources. And there’s a theorem that says that (in theory, subject to various conditions) one can get any Pareto-efficient allocation of resources by doing a bunch of pure money-transfer operations and then letting the market do its thing.
That’s nice, and it indicates that the market is optimizing something that increases as individual utility does: some notion of net utility. But what, exactly? Well, it needs to be one that regards those money-transfers as net-utility neutral.
So, suppose I have $1M and you have $1K, and otherwise we’re fairly similar. Because of the diminishing marginal utility of money, a given amount of money is worth more to you than to me. A common approximation is to say that if you have $X then the marginal utility of an extra $1 is roughly proportional to 1/X; equivalently, that the marginal utility of an extra $1 is roughly proportional to 1/wealth. In that case, an extra $1 for you gains you about as much extra happiness as an extra $1K for me. Consider a transaction in which I find 1000 people like you and pay you each $1 in exchange for what you consider to be $1 worth of inconvenience or pain; I have lost $1K but will be content if I get what I consider to be $1K worth of convenience or pleasure. So we have a possible transaction to which all participants are indifferent: I get a certain amount of happiness; 1000 people each get a roughly equivalent amount of unhappiness; and some money is transferred between us. If money transfers are net-utility-neutral, then by reversing those transfers we get another simpler “utility-neutral” transaction: X units of happiness for me, X units of unhappiness each for 1000 people. So long as they’re 1000x poorer than me.
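(A minimal sketch of that arithmetic, assuming logarithmic utility of wealth, whose marginal utility is exactly 1/wealth; the $1M and $1K figures are just the hypothetical ones above.)

```python
# Sketch of the "utility weighted by wealth" arithmetic, assuming
# utility(wealth) = log(wealth), so marginal utility is exactly 1/wealth.
# The $1M / $1K figures are the hypothetical ones from the example above.
import math

def utility(wealth: float) -> float:
    return math.log(wealth)

rich, poor = 1_000_000, 1_000

gain_rich = utility(rich + 1_000) - utility(rich)  # the rich person gains $1,000
gain_poor = utility(poor + 1) - utility(poor)      # one poor person gains $1

print(f"rich, +$1000: {gain_rich:.6f} utils")  # ~0.001000
print(f"poor, +$1:    {gain_poor:.6f} utils")  # ~0.001000
# The two gains are almost exactly equal: an extra $1 to the poorer person buys
# about as much utility as an extra $1,000 to the richer one, which is the
# equivalence the 1000-person transaction above relies on.
```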
I get the impression that you’re not well informed about EA and the diverse stances EAs have, and that you’re singling out an idiosyncratic interpretation and giving it an unfair treatment.
The first link you cite talks about public good provision within the current economy. How do you conclude from this that e.g. the effective altruists focused on AI safety are being inefficient? And even if you’re talking about e.g. donations to GiveWell’s recommended charities, how does the first link establish that it’s inefficient? Sick people in Africa usually aren’t included in calculations about economic common goods, but EAs care about more than just their country’s economy.
FYI, you’re using highly idiosyncratic terminology here. Outside of LW, “utilitarianism” is the name for a family of consequentialist views that also include solely welfare-focused varieties like negative hedonistic utilitarianism or classical hedonistic utilitarianism.
In addition, you repeat the mantra that it’s an objective fact that “human values are complex”. That’s misleading: what’s complex is human moral intuitions. When you define your goal in life, no one forces you to incorporate every single intuition that you have. You may instead choose to regard some of your intuitions as more important than others, and thereby end up with a utility function of low complexity. Your terminal values are not discovered somewhere within you (how would that process work, exactly?); they are chosen. As EY would say, “the buck has to stop somewhere”.
This claim is wrong: only about 5% of the EAs I know are prioritarians (I have met close to 100 EAs personally). And the link you cite doesn’t support the claim that EAs are prioritarians either; it just argues that you get more QALYs from donating to AMF than from doing other things.
Even less for me.
Thanks for your comment.
Yes, as you stated, I was working with the visible sample of EAs who aren’t focused on existential risk. I feel the term is redundant in relation to existential risk, since effective thinking about existential risk already happens on LessWrong.
The crowding-out effect occurs not just at the individual level (which isn’t applicable to individual EAs, given the room-for-more-funding consideration), but also at the movement level. Because EAs act en bloc, and factor into their considerations ‘what are other people not funding’, they compete in the supply and demand for donations against established institutional donors like the Gates Foundation. One might wonder, if that were true, why those foundations don’t close the funding gaps as a priority—and it looks like someone is trying to answer that here. Admittedly, I haven’t read the article fully, but from a quick skim it looks like the magnitude of donations from high-impact philanthropists is such that it compensates for the ‘ineffectiveness of their cause’, since the charities GiveWell recommends have less room for more funding—which becomes a higher-order consideration at that scale. The obvious counterexample to this is GiveDirectly, but I wouldn’t be surprised if the reason philanthropists don’t like them is fear of setting a precedent against productive mutualistic exchange.
I can’t find the original post about the buck stopping after a bit of Googling. I’d like to keep looking into this!
The post I’m referring to is here, but I should note that EY used the phrase in a different context, and my view on terminal values does not reflect his view. My critique of the idea that all human values are complex is that it presupposes too narrow of an interpretation of “values”. Let’s talk about “goals” instead, defined as follows:
I took the definition from this blogpost I wrote a while back. The comment section there contains a long discussion on a similar issue where I elaborate on my view of terminal values.
Anyway, the way my definition of “goals” seems to differ from the interpretation of “values” in the phrase “human values are complex” is that “goals” allow for self-modification. If I could, I would self-modify into a utilitarian super-robot, regardless of whether it was still conscious or not. According to “human values are complex”, I’d be making a mistake in doing so. What sort of mistake would I be making?
The situation is as follows: Unlike some conceivable goal-architectures we might choose for artificial intelligence, humans do not have a clearly defined goal. When you ask people on the street what their goals are in life, they usually can’t tell you, and if they do tell you something, they’ll likely revise it as soon as you press them with an extreme thought experiment. Many humans are not agenty. Learning about rationality and thinking about personal goals can turn people into agents. How does this transition happen? The “human values are complex” theory seems to imply that we introspect, find out that we care/have intuitions about 5+ different axes of value, and end up accepting all of them as our goals. This is probably how quite a few people are doing it, but they’re victims of a gigantic typical mind fallacy if they think that’s the only way to do it. Here’s what happened to me personally (and incidentally, to about “20+” agents I know personally and to all the hedonistic utilitarians who are familiar with LessWrong content and still keep their hedonistic utilitarian goals):
I started out with many things I like (friendship, love, self-actualization, non-repetitiveness, etc.) plus some moral intuitions (anti-harm, fairness). I then got interested in ethics and in figuring out the best ethical theory. I turned into a moral anti-realist soon, but still wanted to find a theory that incorporates my most fundamental intuitions. I realized that I don’t care intrinsically about “fairness” and became a utilitarian in terms of my other-regarding/moral values. I then had to decide to what extent I should invest in utilitarianism/altruism, and how much in values that are more about me specifically. I chose altruism, because I have a strong, OCD-like tendency to do things either fully or not at all, and I thought saving for retirement, eating healthily, etc. is just as bothersome as trying to be altruistic, because I don’t strongly self-identify with a 100-year-old version of me anyway, so I might as well try to make sure that all future sentience will be suffering-free. I still take a lot of care about my long-term happiness and survival, but much less so than if I had the goal of living forever, and as I said I would instantly press the “self-modify into utilitarian robot” button if there was one. I’d be curious to hear whether I am being “irrational” somewhere, whether there was a step involved that was clearly mistaken. I cannot imagine how that would be the case, and the matter seems obvious to me. So every time I read the link “human values are complex”, it seems like an intellectually dishonest discussion-stopper to me.
Here’s the thread on this at the EA Forum: Effective Altruism and Utilitarianism
I confess that I have not read much of what has been written on the subject, so what I am about to say may be dreadfully naive.
A. One should separate the concept of effective altruism from the mode-of-operation of the various organizations which currently take it as their motto.
A.i. Can anyone seriously oppose effective altruism in principle? I find it difficult to imagine someone supporting ineffective altruism. Surely, we should let our charity be guided by evidence, randomized experiments, hard thinking about tradeoffs, etc etc.
A.ii. On the other hand, one can certainly quibble with what various organization are now doing. Such quibbling can even be quite productive.
B. What comes next should be understood as quibbles.
B.i. As many others have pointed out, effective altruism implicitly assumes a set of values. As Daron Acemoglu asks (http://bostonreview.net/forum/logic-effective-altruism/daron-acemoglu-response-effective-altruism), “How much more valuable is it to save the life of a one-year-old than to send a six-year-old to school?”
B.ii. I think GiveWell may be insufficiently transparent about such things. For example, its explanation of criteria at http://www.givewell.org/criteria does not give a clear-cut explanation of how it makes such determinations.
Caveat: this is only based on browsing the GiveWell webpage for 10 minutes. I’m open to being corrected on this point.
B.iii. Along the same lines I wonder: had GiveWell, or other effective altruists, existed in the 1920s, what would they say about funding a bunch of physicists who noticed some weird things were happening with the hydrogen atom? How does “develop quantum mechanics” rate in terms of benefit to humanity, compared to, say, keeping thirty children in school for an extra year?
B.iv. Peter Singer’s endorsement of effective altruism in the Boston Review (http://bostonreview.net/forum/peter-singer-logic-effective-altruism) includes some criticism of donations to opera houses; indeed, in a world with poverty and starvation, surely there are better things to do with one’s money? This seems endorsed by GiveWell, which lists “serving the global poor” as its priority, and in context I doubt this means serving them via the production of poetry for their enjoyment.
I do not agree with this. Life is not merely about surviving; one must have something to live for. Poetry, music, novels—for many people, these are a big part of what makes existence worthwhile.
C. Ideally, I’d love to see the recommendations of multiple effective altruist organizations with different values, all completely transparent about the assumptions that go into their recommendations. Could anyone disagree that this would make the world a better place?
I emphatically don’t, but yes, one can. The quantitative/reductionist attitude you’ve outlined here biases us towards easily measurable causes.
Some examples of difficult-to-measure causes include: 1) all forms of funding-hungry research, scientific or otherwise; 2) most x-risks, including this forum’s favorite, AI risk; 3) causes which claim to influence social, economic, military, and political matters in complex but possibly high-impact ways; 4) (typically local and community-driven) causes which do good via subtle virtuous cycles, human connections, and various other intangibles.
From my previous comment on the issue:
What does that mean?
Approximately: Applying ideas consistently, even outside of their usual context. Believing in the logical consequences of the things you already believe.
As opposed to: Believing some ideas, but then saying “oh no, that’s completely different!” for no logical reason when someone tries to use the same idea in an unusual situation. (Dividing the world into small compartments, each governed by completely different laws, mutually unconnected.)
See e.g. Outside the Laboratory
I love EA as a concept, I’ve proselytized for it, but I’ve never contributed actual money. I feel vaguely ashamed about that last part, but I’m comfortable calling myself not-EA because I do have a problem with it.
My problem with EA is that it lacks aggression towards its competitors. I think this is a very serious issue, for the following reasons.
The largest altruistic organisations, especially in political development aid, seriously suck. Much like religions, they enjoy some immunity from criticism and benefit from lots of goodwill from volunteer workers. That has made them complacent, and they do not seriously compete with each other. They’re opaque, tribal and too badly managed to be effective. In many cases, they spend more money in the First World than in the Third. They’re typical places for semi-retired politicians and their relatives to get employment, which I’m sure often isn’t technically a https://en.wikipedia.org/wiki/Sinecure but still doesn’t help the job market at those places. Their financial streams are the opposite of an open market: spread across many nations, and directed by so few deciders that “I’ll make sure you lose funding” can be a credible threat. And fundamentally, what they’re consuming is altruistic impulses that would do more good elsewhere—a grossly unethical business model. That’s my own assessment, but lots of people, especially among those employed there, agree with most or all of these points.
Basically, I just get furious when I see large Amnesty International ads that promise people they can save Raif Badawi with a letter. Because not only is that almost certainly a conscious lie—what they’re doing with that ad is cleverly soliciting donations, much of which will be spent on the next ad campaign. And we know people keep different mental accounts: whatever Amnesty International leeches out of people’s altruistic accounts will not be available for much more effective organizations like Deworm the World, which means Amnesty International effectively kills people.
But almost nobody can do anything about it. The people inside these organizations benefit from their comparatively cushy jobs and want to keep them—unlike in industry, staying at one of those places for life is not an unrealistic prospect. Towards the outside, they’re very well defended by their opacity, their relative immunity from criticism (“at least they’re doing something”) and their excellent connections to lots of people in the political and media establishment. Criticisms of specific policies (such as UNICEF’s work against international adoption) or specific programmes (such as the Red Cross’s work on Haiti disaster relief) are occasionally made, but these don’t endanger the swampy ecosystem that is the large humanitarian organisations. Obviously there is little to be gained by attacking that.
Except for EA! EA is uniquely positioned to do something about it. It talks about Altruism, and why it should be Effective, anyway—it implicitly already condemns ineffective altruism, and doing so explicitly would be a small step. It is independently funded by its members and can’t be threatened with losing funding. It isn’t afraid people will suddenly stop being altruistic if, say, EuropeAid was rocked by scandal or if Oxfam suffered a collapse of donations after it got wikileaked. In 2011, Holden from GiveWell wrote a blog post on “Mega-Charities” that was quite critical, but still nowhere near hostile enough.
I’m confident a mere 10% increase in effectiveness of the “Mega-Charities” would move more dollars the right way than a doubling of the EA population. And it wouldn’t be hard to do; some investigative reporting can go a long way. But for actual investigative work you have to be willing to do some actual damage.
Everybody else has an excuse for why they don’t do that. EA doesn’t. And that makes me think they just lack the aggression. Maybe Scott Alexander is right about EA people being super scrupulous. Scrupulosity isn’t a fighting stance.
Here is an example: How the Red Cross Raised Half a Billion Dollars for Haiti and Built Six Homes.
I linked to that, but fucked up the link syntax so it wasn’t displayed. I’ve reposted the corrected comment.
I’m a fan of EA. They are spot on with attempting to help people make better decisions, rather than saying “this is what you should do, because our particular form of Utilitarianism is the best, and if you don’t agree you are simply wrong”. [EDIT: bolded for visibility, because based on the other comments in this thread that point isn’t well advertised. Apparently that’s something they need to work on.]
If I were to make a nitpick, however, it would be this sort of thing:
I’d like to see more numbers, and a framework grounded more in math. Good data probably doesn’t exist, but even just using example calculations with order-of-magnitude guesses at figures would be useful. Doing this in material intended for a general audience would drastically limit the expansion of the movement, however. It would be nice to see a few more technical papers put out, though, perhaps in a peer-reviewed philosophical journal. I suspect that such a thing would not be an optimal use of their time, however, so I can’t really fault them for a lack of full rigor. I’d like to see it happen some day, though. Perhaps once all the low-hanging fruit is gone, and such methods become necessary to fully optimize for the greatest good.
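(For instance, the kind of back-of-the-envelope comparison I have in mind; every figure below is a made-up placeholder, not a real estimate from any EA organisation.)

```python
# Order-of-magnitude sketch with invented numbers, just to show the style of
# calculation; these are NOT real cost-effectiveness estimates.
cost_per_life_charity_a = 3_000     # $, hypothetical "strong" charity
cost_per_life_charity_b = 300_000   # $, hypothetical "weak" charity
donation = 30_000                   # $, hypothetical donation

lives_a = donation / cost_per_life_charity_a
lives_b = donation / cost_per_life_charity_b
print(f"charity A: ~{lives_a:.0f} lives, charity B: ~{lives_b:.1f} lives")
# Even if each figure is only right to within a factor of a few, the 100x gap
# between the charities survives the uncertainty, which is what makes such
# rough estimates useful.
```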
I’d also like to see more citations in their work. I would be interested in at least reading many of the abstracts of papers that support their positions. I’m primarily interested in existential risk, so when they make broad generalizations that are directed mainly at people trying to help traditional charities, I’d like to be able to make an educated guess as to how strongly this advice applies to me. This is probably easier than the above, since they are already doing some research to back up their advice, but it does require a little extra work.
Lastly, the feel of EA would have put me off of it if I’d discovered it 5 years ago. I suspect that a positive, optimistic feel is better for the movement as a whole. However, perhaps it could gain a footing among cynics and nihilists who would not otherwise donate if an alternative website existed for this sort of target audience. It’s quite possible that such a thing would be bad for the movement as a whole right now, but maybe some day if it grows large enough.
I suggest there are two mindsets at play.
effectiveness
altruism
I take effectiveness to mean; assume you have a rational goal (one that has been analysed as being the right goal and a right goal), what is the most effective way to get there (fastest, cheapest, smartest, most sustaining solution to the problem)?
The only argument I can think of against effectiveness has to do with the journey not travelled (if you choose to shortcut the journey, you don’t gain the experiences along the way that might help you when encountering future problems, or the benefits of the journey itself—happiness, “it is a journey”, etc.). However this is a bit of a strawman argument, because any path is a journey, and the path-not-taken could have killed you or been the bad path, etc.
Altruism. This is debatable, as to why an organism should have altruism; it can be analysed down to game theory, and up to high-level “preventing suffering”. Bottom line: some people have it more strongly than others and are compelled to be altruistic more than others. I am really no good at defending or attacking altruism, just because I am weakly informed about each side.
Given altruism, I believe the only path should be effective altruism. You might want to question altruism on its own (but some very smart people have already done that—so you can read their works if you are interested)
here—have a link—https://en.wikipedia.org/wiki/Altruism_(biology)
http://diegocaleiro.com/2015/05/26/effective-altruism-as-an-intensional-movement/