I was reluctant to reply to this because it seemed like a comment on the general concept of donor lotteries, but not a comment on the actual post, which specifically responds to several points made in this comment. But one of my housemates mentioned that they felt the need to reply—so hopefully if I write this people will at least see that the main claims here have been addressed, and not spend their own time on this.
This is a pretty bold claim:
Keep in mind that much of the low-hanging analysis from a bog-standard EA’s perspective has already been performed by GiveWell, and you can’t really expect to meaningfully improve on their estimates.
It’s only relevant if you’re so confident in it that you don’t feel the need to do any double-checking—that the right amount of research to do is zero or nearly zero. I find it pretty implausible that a strategy that involves negligible research time manages to avoid simply having money extracted from it by whoever’s best at marketing. If GiveWell donors largely aren’t checking whether GiveWell’s recommendations are reasonable, this is good reason to suspect that GiveWell’s donors aren’t buying what they mean to buy.
Once you’re spending even a modest amount of time doing research, something like a donor lottery should be appealing for small donation amounts. As I wrote in the post, the lottery can be net-positive even with no additional research, because if you don’t win you simply save whatever time you’d have spent on research. For more on this, you might try reading the section titled “Diminishing marginal costs”.
The objection of value misalignment with other donors (“might donate it to the KKK”) should already be priced in if you’re not trying to double-count impact. The point of a donor lottery is to buy variance, if you think there are returns to scale for your giving. Coordinating between multiple donors just saves on transaction costs. For more on this, you might try reading the section titled “Lotteries, double-counting, and shared values”.
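To make the variance-buying point concrete, here is a minimal sketch of the arithmetic (Python, with illustrative numbers I’ve made up; the structure is just a standard pro-rata donor lottery):

```python
# Minimal sketch of donor-lottery arithmetic (illustrative numbers only).
# N donors each contribute `stake` dollars; one donor is chosen to allocate
# the whole pot, with probability proportional to their contribution.

n_donors = 100
stake = 1_000                      # each donor's contribution, in dollars
pot = n_donors * stake             # pot controlled by whoever wins

p_win = stake / pot                # your chance of directing the pot (0.01)
expected_allocation = p_win * pot  # = stake: expected dollars you direct

hours_if_win = 40                  # research you'd do only if you win
expected_hours = p_win * hours_if_win  # = 0.4 expected hours of research

print(expected_allocation)  # 1000.0 -- same as donating directly
print(expected_hours)       # 0.4    -- research cost is paid only on a win
```

The expected dollars you direct are unchanged; what changes is that the research effort is concentrated on the rare case where there’s a large pot to direct, which is where it pays for itself.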
If you don’t care about the impact of your charitable giving, such that research that improves its impact doesn’t seem to further your interests (“a bunch of research that has low value to me to make a decision”), then I’m pretty confused about why you think you’re anything like the target market for this.
I’m not OP, but I have similar feelings about GiveWell. They have 19 full-time employees (at least 8 of whom are researchers). I am one person with a full-time non-research, non-charity job. Assume I spend 40 hours on this if I win (around a month of free time). Running the numbers, I expect GiveWell to be able to spend at least 400x more time on this, and I expect their work to be far more productive, because they wouldn’t be running themselves ragged with (effectively) two jobs, and the average GiveWell researcher already has more than a year of experience doing this and the connections that come with it.
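To spell out the “running the numbers” step, a back-of-the-envelope sketch (the researcher count is from this comment; hours per researcher-year is an assumed round number):

```python
# Back-of-the-envelope check on the "at least 400x" claim.
# Researcher count is from the comment above; hours/year is an assumption.

researchers = 8          # "at least 8" GiveWell researchers
hours_per_year = 2_000   # assumed full-time hours per researcher per year
my_hours = 40            # time I'd realistically put in if I won

givewell_hours = researchers * hours_per_year  # 16,000 hours per year
ratio = givewell_hours / my_hours

print(ratio)  # 400.0 -- GiveWell's research time exceeds mine by ~400x
```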
Regarding the target audience, I feel like the kinds of people who would enjoy doing this should either apply for a job at GiveWell, or start a new charity evaluator. If you think you can do better than they can, why rely on a lottery victory to prove it?
I agree that GiveWell does high-quality research and identifies effective giving opportunities, and that donors can do reasonably well by deferring to their recommendations. I think it is not at all crazy to suspect that you can do better, and I do not personally give to GiveWell recommended charities. Note for example that Holden also does not donate exclusively to GiveWell charities, and indeed is generally supportive of using either lotteries or delegation to trusted individuals.
GiveWell does not purport to solve the general problem of “where should EAs give money.” They purport to evaluate one kind of intervention: “programs that have been studied rigorously and ideally repeatedly, and whose benefits we can reasonably expect to generalize to large populations, though there are limits to the generalizability of any study results. The set of programs fitting this description is relatively limited, and mostly found in the category of health interventions” (here).
The situation isn’t “you think for X hours, and the more hours you think the better the opportunities you can find, which you can then spend arbitrarily large amounts of money on.” You need to do some thinking in order to identify opportunities to do good, each of which can accept only a certain amount of money. In order to identify a better donation opportunity than GiveWell’s, one does not have to do more work than GiveWell, or delegate to someone who has done more work.
By thinking longer, you could identify a different delegation strategy, rather than finding an object-level recommendation. You aren’t improving on GiveWell’s research, just on your current view that GiveWell is the right person to defer to. There are many people who have spent much longer than you thinking about where to give, and at a minimum you are picking one of them. Having large piles of money and being thoughtful about where to give it is the kind of situation that (for example) made it possible for GiveWell to get started, and it seems somewhat perverse to celebrate GiveWell while placing no value on the conditions that allowed it to come into existence.
In a normal world, the easiest recommendations to notice/verify/follow would receive the most attention, and so all else equal you might get higher returns by looking for recommendations that are harder to notice, verify, or follow.
If you think GiveWell recommended charities are the best intervention, then you should be pretty much risk neutral over the scale of $100k or even $1M. So the cost is relatively low (perhaps mostly my 0.5% haircut), and you would have to be pretty confident in your view (despite the fact that many thoughtful people disagree) in order for entering the lottery to be worthless.
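Concretely, under risk neutrality the only expected cost of entering is the fee; a minimal sketch (the 0.5% haircut is the figure above, the donation size is made up):

```python
# Sketch of the risk-neutral cost comparison (illustrative numbers).
# If GiveWell top charities really are the best use of the money, the
# expected loss from entering the lottery is roughly the administrative fee.

donation = 1_000   # your contribution, in dollars
haircut = 0.005    # the 0.5% administrative haircut mentioned above

value_direct = donation                   # donate straight away
value_lottery = donation * (1 - haircut)  # expected dollars allocated via the lottery

print(value_direct)   # 1000
print(value_lottery)  # 995.0 -- expected cost of entering is about $5
```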
The point of lotteries is not to have fun or prove that we are clever, it is to use money well.
I think my answer to all of this is: that sounds great but wouldn’t it be better if it wasn’t random?
If you have the skills and interest to do charity evaluation, why wait to win the lottery when you could join or start a charity evaluator? If you need money, running a fundraiser seems better than hoping to win the lottery.
If you think you’re likely to find a better meta charity than GiveWell, it seems better to just do that research now and write a blog post to make other people aware of your results, rather than the more convoluted method of writing blog posts to convince people to join a lottery and then hoping to win.
And if you aren’t very interested in charity research, why join a donor lottery that picks the decider at random when you could join one where it’s always the most competent member (100% of the time, GiveWell gets to decide how to allocate the donation)?
I think my answer to all of this is: that sounds great but wouldn’t it be better if it wasn’t random?
Why would that be better?
If you think you’re likely to find a better meta charity than GiveWell, it seems better to just do that research now and write a blog post to make other people aware of your results
I think you are radically, radically underestimating the difficulty of reaching consensus on challenging questions.
For example: a significant fraction of Open Phil staff make significant contributions to charities other than GiveWell recommendations, and in many cases they haven’t reached consensus with each other; some give to farm animal welfare, some to science, some to political causes, etc., and even within causes there is significant disagreement. This is despite the fact that they spend most of their time thinking about philanthropy (though not about their personal giving).
why join a donor lottery that picks the decider at random when you could join one where it’s always the most competent member
If you will certainly follow GiveWell recommendations after winning, then gambling makes no difference and isn’t worth the effort (though hopefully it will eventually take nearly 0 effort, so it’s really a wash). If you think that GiveWell is the most competent decider, yet somehow don’t think that you will follow their recommendations, then I’m not sure what to say to you. If you are concerned about other people making bad decisions with their money, well that’s not really your problem and it’s orthogonal to whether they gamble with it.
If GiveWell donors largely aren’t checking whether GiveWell’s recommendations are reasonable, this is good reason to suspect that GiveWell’s donors aren’t buying what they mean to buy.
One assumes that having a few people do sanity checks on randomly selected pieces of their work is good enough, plus one assumes that GiveWell isn’t capable of a stealthy transformation into an evil organisation overnight without anyone on the inside raising the alarm.
Put another way, by doing this donor lottery thing, you’re giving GiveWell a 1/10 rating already. It would be like my implicit 1/10 rating of the local supermarkets if I started growing my own food. Charity recommendation is their job! It’s what they specialise in! If you have to spend a lot of time DIYing it, then they suck!
It’s only relevant if you’re so confident in it that you don’t feel the need to do any double-checking—that the right amount of research to do is zero or nearly zero.
My contention is that the people who are willing to participate in this have already done non-negligible amounts of thinking on this topic, because they are EA hobbyists. How could one be engaging with the EA community without spending time thinking about the core issues at hand? Because of diminishing marginal returns, they have already paid the cost of the research with the highest marginal value, through their engagement with the community and their reflection on these topics. I do not believe this is addressed in the original article. I believe this is our fundamental disagreement.
The objection of value misalignment can’t be priced in because there is no pricing mechanism at play here, so I’m not sure what you mean (except for paulfchristiano’s fee for administering the fund). That exact point was not the main thrust of the paragraph, however. The main thrust of that paragraph was to explain the two possible outcomes in the lottery, and how both lead to potentially negative outcomes, given the diminishing marginal returns to original research and the limits that outside circumstances place on a person’s time.
I am in the target market in the sense that I donate to EA charities, and I think that SOMEONE doing research improves its impact, but I guess I am not in the target market in the sense that I think that person has to be me.
Regarding your snips about my not reading the article, it’s true that if I had more time and more interest in this topic, I would offer better quality engagement with your ideas, so I apologize that I lack those things.
By “priced in,” I meant something like—you shouldn’t be counting the benefits from the cases where you lose anyway, otherwise you end up effectively double-counting contributions.
On trusting GiveWell:
Apple knows much, much more about what makes a smartphone good than I do. They’ve put huge amounts of research into it. Therefore I shouldn’t try to build my own smartphone (because I expect there are genuinely huge returns to scale). This doesn’t mean that I should defer to Apple’s judgment about whether I should buy a smartphone, or which one to buy.
Samsung’s also put much, much more work than I have into what the optimal arrangement of a smartphone is. That doesn’t help me decide whether to buy an iPhone or a Samsung.
McDonald’s has put similarly huge amounts of expert work into figuring out how to produce hamburgers optimally, but I still expect that I can easily produce a much higher-quality product in my own home, so it isn’t even always the case that returns to scale mean one can’t compete on small batches.
Do you think GiveWell’s substantially different?
GiveWell is certainly different from those examples. Your examples all involve a clear motive to convince people to use the company’s own product, even if better options exist. GiveWell are analysts, not producers of goods, and are explicitly trying to guide people to the best choice (within a set of constraints).
A better example would be choosing a restaurant. Michelin and Yelp have far more data and have put far more work into evaluating and rating food providers than you ever can. But you still need to figure out how your preferences fit into their evaluation framework, and navigate the always-changing landscape to make an actual choice.
(Note that the conclusion is the same: you still must expend some search cost.)
GiveWell are analysts, not producers of goods, and are explicitly trying to guide people to the best choice (within a set of constraints).
A lot of GiveWell’s mission is also EA movement building. By advocating the standard that evidence is important, they encourage existing charities to focus on finding evidence for their claims.
I don’t think “incentive” cuts at the joints here, but selection pressure does. You’re going to hear about the best self-promoters targeting you, which is only an indicator of the qualities you care about to the extent that those qualities contribute to self-promotion in that market.
Personal experience: I occasionally use Yelp, but in some cases it’s worse than useless because I care about a pretty high standard of food quality, and often Yelp restaurant reviews are about whether the waiter was nice, the restaurant seemed fancy, the portions were big, sometimes people mark restaurants down for having inventive & therefore challenging food, etc. So I often get better information from the Chowhound message board, which no one except foodies has heard of.