Saying “Yes, I can apply this framework to concrete examples” does not actually make anything more concrete.
Did Holden ever do the calculation or endorse someone else’s calculation? What heuristic did he use to reject the calculation? “Never pursue a small chance of a large effect”? “Weird charities don’t work”?
If you calculate that this is ineffective or use heuristics to reject the calculation, I’d like to see this explicitly. Which heuristics?
Which calculation are you referring to? In order to do a calculation one needs to have in mind a specific intervention, not just “asteroid risk prevention” as a cause.
Before worrying about specific interventions, you can compute an idealized version as in, say, the Copenhagen Consensus. There are existing asteroid detection programs. I don’t know if any of them take donations, but their existence does allow assessments of realistic organizations. At some level of cost-effectiveness, you have to consider other interventions, like starting your own organization or promoting the cause. Not having a list of interventions is no excuse for not computing the value of intervening.
I would guess that it’s fairly straightforward to compute the cost-effectiveness of an asteroid strike reduction program to within an order of magnitude in either direction.
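To make the shape of such a calculation concrete, here is a minimal back-of-the-envelope sketch in Python. Every input (annual impact probability, program cost and duration, fraction of risk averted, population figure) is an illustrative placeholder assumption chosen only to show how the arithmetic works; none of these numbers comes from this thread or from any cited source.

```python
# Back-of-the-envelope sketch of an order-of-magnitude cost-effectiveness
# estimate for an asteroid strike reduction program.
# Every number below is an illustrative placeholder assumption,
# not a figure from this discussion or from any cited source.

annual_impact_probability = 1e-8   # assumed yearly chance of a civilization-threatening impact
program_cost_usd = 5e8             # assumed total cost of a detection/deflection program
program_duration_years = 20        # assumed period over which the program buys protection
risk_reduction_fraction = 0.5      # assumed fraction of the risk the program averts
world_population = 8e9             # rough current world population

# Expected lives saved = P(impact during program) * fraction averted * people affected
expected_lives_saved = (annual_impact_probability * program_duration_years
                        * risk_reduction_fraction * world_population)

cost_per_expected_life = program_cost_usd / expected_lives_saved

print(f"Expected lives saved: {expected_lives_saved:,.0f}")
print(f"Cost per expected life saved: ${cost_per_expected_life:,.0f}")
```

Varying these placeholder inputs across their plausible ranges moves the bottom line by orders of magnitude, which is the sense in which such an estimate can only be trusted to within an order of magnitude or so.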
The situation becomes much more complicated when assessing the cost-effectiveness of something like a “Friendly AI program,” where the relevant issues are far murkier than those surrounding asteroid strike prevention.
GiveWell is funded by a committed base of donors. It’s not clear to me that these donors are sufficiently interested in x-risk reduction that they would continue to fund GiveWell if it were to focus on finding x-risk reduction charities.
I think that it’s sensible for GiveWell to have started by investigating the cause of international health. This has allowed them to gain experience, credibility, and empirical feedback, which have strengthened the organization.
Despite the above three points, I share your feeling that at present it would be desirable for GiveWell to put more time into studying x-risks and x-risk reduction charities. I think that they’re now sufficiently established that, at the margin, they could do more x-risk-related research while still satisfying their existing constituents.
Concerning the issue of asteroid strike risk in particular, it presently looks to me as though there are likely x-risk reduction efforts which are more cost-effective, largely because it seems as though people are already taking care of the asteroid strike issue. See Hellman’s article on nuclear risk and this article from Pan-STARRS (HT wallowinmaya). I’m currently investigating the issue of x-risk precipitated by nuclear war and which organizations are working on nuclear nonproliferation.
Sure, but my comment is not about what GiveWell or anyone should do in general, but about the context of this article: Holden is engaging with x-risk and trying to clarify disagreement, so let’s not worry about whether or when he should (he has made many other comments about it over the years). I think it would be better for him to do so concretely, rather than claiming that vague abstract principles lead to unspecified disagreements with unnamed people. I think he would better convey the principles by applying them. I’m not asking for 300 hours of asteroid research, just as much time as it took to write this article. I could be wrong, but I think even a very sloppy treatment of asteroids would be useful.
The article has relevance to thinking about effective philanthropy independently of whether one is considering x-risk reduction charities. I doubt that it was written exclusively with x-risk in mind.
I can’t speak for Holden here but I would guess that to the extent that he wrote the article with x-risk in mind, he did so to present a detailed account of an important relevant point which he can refer to in the future so as to streamline subsequent discussions without sacrificing detail and clarity.
So he could have written a concrete account of the disagreement with Deworm the World. The only concrete section was the one on BeerAdvocate, and that was the only useful section. Pointing to the other sections in the future is a sacrifice of detail and clarity.