Hi—answer to this will be posted along with the responses to other questions on Giles’ discussion page. If you e-mail me (will [dot] crouch [at] givingwhatwecan.org) then I can send you the calculations.
It’s a good question! I was going to respond, but I think that, rather than answering questions on this thread, I’ll just let people keep asking questions and then respond to them all at once—hopefully that’ll make the thread more readable for other users.
Here is the CEA website—but it’s just a stub linking to the others.
And no. To my knowledge, we haven’t contacted her. From the website, it seems like our approaches are quite different, though the terms we use are similar.
These are all good questions! Interestingly, they are all relevant to the empirical aspect of a research grant proposal I’m writing. Anyway, our research team is shared between 80,000 Hours and GWWC. They would certainly be interested in addressing all these questions (I think it would officially come under GWWC). I know that those at GiveWell are very interested in at least some of the above questions as well; hopefully they’ll write on them soon.
Feel free to post the questions just now, Giles, in case there are others that people want to add.
Thanks for this; it’s a common response to earning to give. However, we already have a number of success stories: people who have started their EtG jobs and are loving them.
It’s rare that someone who had their heart set on a particular career, such as charity work, completely changes their plans and begins EtG. Much more common is someone thinking “I really want to do [lucrative career X], but I should do something more ethical,” or “I’m undecided between lucrative career X and other careers Y and Z; all look like good options.” It’s much easier to convince these people.
We certainly want to track behaviour. We will have an annual survey of members, to find out what they are doing, and how much they are giving, and so on. If someone really isn’t complying with the spirit of 80k, or with their stated goals, then we’ll ask them to leave.
Thanks for this. Asking people “how much would you have pledged?” is of course only a semi-reliable method of ascertaining how much someone actually would have pledged. Some people—like yourself—might neglect the fact that they would have been convinced by the same arguments from other sources; others might be overoptimistic about how their future self would live up to their youthful ideals. We try to be as conservative as reasonable with our assumptions in this area: we take the data and then err on the side of caution. We assumed that 54% of the pledged donations would have happened anyway, that 25% of donations would have gone to comparably good charities, and that we have a dropout rate amortized over time equivalent to 50% of people dropping out immediately. It’s possible that these assumptions still aren’t conservative enough.
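To make the arithmetic concrete, here’s a minimal sketch of how discounts like these could combine, assuming they are applied multiplicatively (the exact method isn’t spelled out above) and using a made-up pledge total purely as a placeholder:

```python
# Minimal sketch: how the counterfactual discounts quoted above might combine.
# The 1,000,000 pledge total is a hypothetical placeholder, not a GWWC figure,
# and multiplicative combination is an assumption for illustration.
pledged = 1_000_000

would_have_happened_anyway = 0.54   # donations assumed to occur without GWWC
comparably_good_elsewhere = 0.25    # donations assumed to go to similarly good charities
effective_dropout = 0.50            # amortised dropout, treated as if immediate

counterfactual = (pledged
                  * (1 - would_have_happened_anyway)
                  * (1 - comparably_good_elsewhere)
                  * (1 - effective_dropout))
print(counterfactual)  # 172,500: roughly 17% of the headline pledge is credited
```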
That’s right. If there’s a lot of concern, we can write up what we already know, and look into it further—we’re very happy to respond to demand. This would naturally go under EAA research.
Thanks benthamite, I think everything you said above was accurate.
It would be good to have more analysis of this.
Is saving someone from malaria really the most cost-effective way to speed technological progress per dollar?
The answer is that I don’t know. Perhaps it’s better to fund technology directly. But the benefit:cost ratio tends to be incredibly high for the best developing world interventions. So the best developing world health interventions would at least be contenders. In the discussion above, though, preventing malaria doesn’t need to be the most cost-effective way of speeding up technological progress. The point was only that that benefit outweighs the harm done by increasing the amount of farming.
On (a). The argument for this is based on the first half of Bostrom’s Astronomical Waste. In saving someone’s life (or some other good economic investment), you move technological progress forward by a tiny amount. The benefit you produce is the difference you make at the end of civilisation, when there’s much more at stake than there is now.
“It’s almost certainly more like −10,000N.”

I’d be cautious about making claims like this. We’re dealing with tricky issues, so I wouldn’t claim to be almost certain about anything in this area. The numbers I used in the above post were intended to be purely illustrative, and I apologise if they came across as being more definite than that.
Why might I worry about the −10,000N figure? Well, first, the number you reference is the number of animals eaten in a lifetime by an American—the greatest per capita meat consumers in the world. I presume that the number is considerably smaller for those in developing countries, where there is considerably less reliance on factory farming.
Even assuming we were talking about American lives, is the suffering that an American causes 10,000 times as great as the happiness of their lives? Let’s try a back-of-the-envelope calculation. Let’s accept that 21,000 figure. I can’t access the original source, but some other digging suggests that this breaks down into: 17,000 shellfish, 1,700 other fish, 2,147 chickens, with the rest constituting a much smaller number. I’m really not sure how to factor in shellfish and other fish: I don’t know if they have lives worth living or not, and I presume that most of these are farmed, so wouldn’t have existed were it not for farming practices. At any rate, from what I know I suspect that factory farmed chickens are likely to dominate the calculation (but I’m not certain). So let’s focus on the chickens.

The average factory farmed chicken lives for 6 weeks, so that’s 252 factory farmed chicken-years per American lifetime. Assuming the average American lives for 70 years, one American life-year produces 3.6 factory farmed chicken-years. What should our tradeoff be between producing factory farmed chicken-years and American human-years? Perhaps the life of the chicken is 10x as bad as the American life is good (that seems a high estimate to me, but I really don’t know): in which case we should be willing to shorten an American’s life by 10 years in order to prevent one factory-farmed chicken-year. That would mean that, if we call one American life a good of unit 1, the American’s meat consumption produces −36 units of value.
In order to get this estimate up to −10,000 units of value, we’d need to multiply that trade-off by 277: we should be indifferent between producing 2,770 years of American life and preventing the existence of 1 factory farmed chicken-year (that is, we should be happy letting 40 vegan American children die in order to prevent 1 factory farmed chicken-year). That number seems too high to me; if you agree, perhaps you think that fish or shellfish suffering is the dominant consideration. Or you might bring in non-consequentialist considerations; as I said above, I think that the meat eater problem is likely more troubling for non-consequentialists.
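For anyone who wants to check the numbers, here’s a minimal sketch of the back-of-the-envelope arithmetic in the two paragraphs above; every input is the illustrative figure quoted there, and the 10x tradeoff is the assumption flagged as a high estimate:

```python
# Back-of-the-envelope arithmetic from the text above; all inputs are the
# illustrative figures quoted there, not independent estimates.
chickens_per_lifetime = 2147        # factory farmed chickens per American lifetime
chicken_lifespan_years = 6 / 52     # roughly 6 weeks each
american_lifespan_years = 70

chicken_years = chickens_per_lifetime * chicken_lifespan_years
# ~248 factory farmed chicken-years per lifetime (the text rounds to 252),
# i.e. ~3.5 chicken-years per American life-year.

# Illustrative assumption: one chicken-year is 10x as bad as a human life-year is good.
tradeoff = 10
harm_in_american_lives = chicken_years * tradeoff / american_lifespan_years
# ~35 "American lives" of disvalue, matching the text's roughly -36 units.

# To reach -10,000 units, the tradeoff would have to be ~280x larger
# (the text uses 277, i.e. ~2,770 human life-years per chicken-year).
required_tradeoff = tradeoff * 10_000 / harm_in_american_lives
print(round(harm_in_american_lives), round(required_tradeoff))
```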
At any rate, this is somewhat of a digression. If one thought that meat eater worries were strong enough that donating to GWWC or 80k was a net harm, I would think that a reasonable view (and one could give further arguments in favour of it, that we haven’t discussed), though not my own one for the reasons I’ve outlined. We knew that something animal welfare focused had been missing from CEA for too long and for that reason set up Effective Animal Activism—currently a sub-project of 80k, but able to accept restricted donations and, as it grows, likely to become an organisation in its own right. So if one thinks that animal welfare charities are likely to be the most cost-effective charities, and one finds the meta-charity argument plausible, then one might consider giving to EAA.
Interesting. The deeper reason why I reject average utilitarianism is that it makes the value of lives non-separable.
“Separability” of value just means being able to evaluate something without having to look at anything else. I think that whether or not it’s a good thing to bring a new person into existence depends only on facts about that person (assuming they don’t have any causal effects on other people): the amount of their happiness or suffering. So, in deciding whether to bring a new person into existence, it shouldn’t be relevant what happened in the distant past. But average utilitarianism makes it relevant: because long-dead people affect the average wellbeing, and therefore affect whether it’s good or bad to bring that person into existence.
But, let’s return to the intuitive case above, and make it a little stronger.
Now suppose:
Population A: 1 person suffering a lot (utility −10)
Population B: That same person, suffering an arbitrarily large amount (utility -n, for any arbitrarily large n), and a very large number, m, of people suffering −9.9.
Average utilitarianism entails that, for any n, there is some m such that Population B is better than Population A. That is, average utilitarianism is willing to add horrendous suffering to someone’s already horrific life in order to bring into existence many other people with horrific lives.
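To spell out why this holds for any n: Population B’s average tends to −9.9 as m grows, so it eventually rises above Population A’s average of −10. A minimal sketch, with an arbitrarily chosen n purely for illustration:

```python
# Average utilitarianism applied to the two populations above.
# avg(A) = -10; avg(B) = (-n + m * -9.9) / (m + 1), which tends to -9.9 as m grows,
# so for any n, some m makes B's average higher (any m > 10 * (n - 10) works).

def avg_B(n, m):
    return (-n + m * -9.9) / (m + 1)

n = 10**9        # arbitrarily horrific suffering for the original person
m = 20 * n       # comfortably above the 10 * (n - 10) threshold
assert avg_B(n, m) > -10   # average utilitarianism ranks B above A
```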
Do you still get the intuition in favour of average here?
By the way, thanks for the comments! Seeing as the post is getting positive feedback, I’m going to promote it to the main blog.
In order to get exceptional value for money you need to (correctly) believe that you are smarter than the big donors - otherwise they’d already have funded whatever you’re planning on funding to the point where the returns diminish to the same level as everything else.
That’s if you think that the big funders are rational and have similar goals to you. I think assuming they are rational is pretty close to the truth (though I’m not sure: charity doesn’t have the same feedback mechanisms as business, because if you make bad decisions you don’t get punished in the same way). beoShaffer suggests that they just have different goals—they are aiming to make themselves look good, rather than do good. I think that could explain a lot of cases, but not all—e.g. it just doesn’t seem plausible to me for the Gates Foundation.
So I ask myself: why doesn’t Gates spend much more money on increasing revenue to good causes, through advertising etc? One answer is that he does spend such money: the Giving Pledge must be the most successful meta-charity ever. Another is that charities are restricted in how they can act by cultural norms. E.g. if they spent loads of money on advertising, then their reputation would take a big enough hit to outweigh the benefits through increased revenue.
I wouldn’t want to commit to an answer right now, but the Hansonian Hypothesis does make the right prediction in this case. If I’m directly helping, it’s very clear that I have altruistic motives. But if I’m doing something much more indirect, then my motives become less clear. (E.g. if I go into finance in order to donate, I no longer look so different from people who go into finance in order to make money for themselves). So you could take the absence of meta-charity as evidence in favour of the Hansonian Hypothesis.
That’s the hope! See below.
Hey,
80k members give to a variety of causes. When we surveyed our members, 34% were intending to give to x-risk, and it seems fairly common for people who start thinking about effective altruism to ultimately conclude that x-risk mitigation is one of the most important cause areas, if not the most important. As for how this pans out with additional members, we’ll have to wait and see. But I’d expect $1 to 80k to generate significantly more than $1’s worth of value even for existential risk mitigation alone. It certainly has done so far.
We did a little bit of impact assessment for 80k (again, with a sample of 26 members). When we did, the estimates were even more optimistic than for GWWC. But we’d like to get a firmer data set before going public with any numbers.
Though I was deeply troubled by the poor meat eater problem for some time, I’ve come to the conclusion that it isn’t that bad (for utilitarians—I think it’s much worse for non-consequentialists, though I’m not sure).
The basic idea is as follows. If I save the life of someone in the developing world, almost all the benefit I produce is through compounding effects: I speed up technological progress by a tiny margin, giving us a little bit more time at the end of civilisation, when there are far more people. This benefit dwarfs the benefit to the individual whose life I’ve saved (as Bostrom argues in the first half of Astronomical Waste). Now, I also increase the amount of animal suffering, because the person whose life I’ve saved consumes meat, and I speed up development of the country, which means that the country starts factory farming sooner. However, we should expect (or, at least, I expect) factory farming to disappear within the next few centuries, as cheaper and tastier meat substitutes are developed. So the increase in animal suffering doesn’t compound in the same way: whereas the benefits of saving a life continue until the human race (or its descendants) dies out, the harm of increasing meat consumption ends after only a few centuries (when we move beyond factory farming).
So let’s say the benefit to the person from having their life saved is N. The magnitude of the harm from increasing factory farming might be somewhat greater than that: maybe −10N. But the benefit from speeding up technological progress is vastly greater still: 1000N, or something. So it’s still a good thing to save someone’s life in the developing world. (Though of course, if you take the arguments about x-risk seriously, then alleviating global poverty is dwarfed by existential risk mitigation.)
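Just to make the bookkeeping explicit, here is the sum using those purely illustrative multiples (all placeholder values from the paragraph above, not estimates):

```python
# Purely illustrative bookkeeping, in units of N, using the placeholder multiples above.
direct_benefit = 1            # benefit to the person whose life is saved
farming_harm = -10            # extra factory farming from their consumption and faster development
technological_benefit = 1000  # compounding benefit from slightly faster progress

net_value = direct_benefit + farming_harm + technological_benefit
print(net_value)  # 991: still clearly positive on these assumptions
```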
Thanks for that. I guess that means I’m not a rationalist! I try my best to practice (1). But I only contingently practice (2). Even if I didn’t care one jot about increasing happiness and decreasing suffering in the world, then I think I still ought to increase happiness and decrease suffering. I.e. I do what I do not because it’s what I happen to value, but because I think it’s objectively valuable (and if you value something else, like promoting suffering, then I think you’re mistaken!) That is, I’m a moral realist. Whereas the definition given in Eliezer’s post suggests that being a rationalist presupposes moral anti-realism. When I talk with other LW-ers, this often seems to be a point of disagreement, so I hope I’m not just being pedantic!
Haha! I don’t think I’m worthy of squeeing, but thank you all the same.
In terms of the philosophy, I think that average utilitarianism is hopeless as a theory of population ethics. Consider the following case:
Population A: 1 person exists, with a life full of horrific suffering. Her utility is −100.
Population B: 100 billion people exist, each with lives full of horrific suffering. Each of their utility levels is −99.9.
Average utilitarianism says that Population B is better than Population A. That definitely seems wrong to me: bringing into existence people whose lives aren’t worth living just can’t be a good thing.
Can I clarify: I think you meant “CEA” rather than “EAA” in your first question?