I agree with much of this post, but find a disconnect between the specific criticisms and the overall conclusion of withholding funds from SI even for “donors determined to donate within this cause”, and even aside from whether SI’s FAI approach increases risk. I see a couple of ways in which the conclusion might hold.
1. SI is doing worse than they are capable of, due to wrong beliefs. Withholding funds gives them an incentive to do what you think is right, without having to change their beliefs. But this could lead to waste if donors disagree in different directions and funds end up sitting unused because SI can’t satisfy everyone, or if SI thinks the benefit of doing what they consider optimal outweighs the value of the extra funds they could get by doing what you think is best.
2. A more capable organization already exists, or will emerge later, and would be a better use of your money. This seems unlikely in the near future: we’re already familiar with the “major players” in the existential-risk area, and past history doesn’t suggest that a new group of highly capable people will suddenly take an interest in the cause. In the longer run, many more people will likely be drawn to work in this area as the threat of a bad-by-default Singularity becomes more obvious, but those people have the disadvantage of less time for their work to take effect (which reduces the average value of donations), and there will probably also be many more willing donors by then (which reduces the marginal value of donations).
So neither of these ways to fill in the missing part of the argument seems very strong. I’d be interested to know what Holden’s own thoughts are, or if anyone else can make stronger arguments on his behalf.
If Holden believes that:
A) reducing existential risk is valuable, and
B) SI’s effectiveness at reducing existential risk is a significant contributor to the future of existential risk, and
C) SI is being less effective at reducing existential risk than they would be if they fixed some set of problems P, and
D) withholding GiveWell’s endorsement while pre-committing to re-evaluating that refusal if given evidence that P has been fixed increases the chances that SI will fix P...
...it seems to me that Holden should withhold GiveWell’s endorsement while pre-committing to re-evaluating that refusal if given evidence that P has been fixed.
Which seems to be what he’s doing. (Of course, I don’t know whether those are his reasons.)
What, on your view, ought he do instead, if he believes those things?
Holden must believe some additional relevant statements, because A-D (with “existential risk” suitably replaced) could be applied to every other charity, as presumably no charity is perfect.
I guess what I most want to know is what Holden thinks are the reasons SI hasn’t already fixed the problems P. If it’s lack of resources or lack of competence, then “withholding … while pre-committing …” isn’t going to help. If it’s wrong beliefs, then arguing seems better than “incentivizing”, since arguing provides a permanent rather than a temporary solution, and in the course of arguing you might find out that you’re wrong yourself. What does Holden believe that leads him to think providing explicit incentives to SI is a good idea?
Thanks for making this argument!
AFAICT charities generally have perverse incentives—to do what will bring in donations, rather than what will do the most good. Those incentives often argue against things like transparency, for example. So I think when Holden says “don’t donate to X yet”, it’s usually part of an effort to make these incentives saner.
As it happens, I don’t think this problem applies especially strongly to SI, but others may differ.
Relevant
That is indeed relevant: it describes some perverse incentives and weird behaviors of nonprofits, with an interesting example. But knowing this context without having to click the link would have been useful. It is customary to explain what a link is about rather than just dropping it.
(Or at least it should be)
But C applies more to some charities than others. And evaluating how much of a charity’s potential effectiveness is lost to internal flaws is a big piece of what GiveWell does.
Absolutely agreed that if D is false—for example, if increasing SI’s incentive to fix P doesn’t in fact increase SI’s chances of fixing P, or if a withholding+precommitting strategy doesn’t in fact increase SI’s incentive to fix P, or if D fails for some other reason—then the strategy I describe makes no sense.
Holden said, “However, I don’t think that ‘Cause X is the one I care about and Organization Y is the only one working on it’ to be a good reason to support Organization Y.”
This addresses your point (2). Holden believes that SI is grossly inefficient at best and actively harmful at worst (since he thinks they might inadvertently increase AI risk). Therefore, giving money to SI would be counterproductive; a donor would get a better return on investment elsewhere.
As for point (1), my impression is that Holden’s low estimate of SI’s competence is due to a combination of what he sees as wrong beliefs and an insufficient ability to put even correct beliefs into practice. SI claims to be supremely rational, but their list of achievements is lackluster at best—which suggests a certain amount of Dunning-Kruger effect at work. Furthermore, SI appears to be focused on growing SI and teaching rationality workshops, as opposed to their stated mission of researching FAI theory.
Additionally, Holden indicted SI members pretty strongly (though very politely) for what I will (less politely) label as arrogance. The prevailing attitude of SI members seems to be (according to Holden) that the rest of the world is just too irrational to comprehend their brilliant insights, and therefore has little to offer—so any criticism of SI’s goals or actions can be dismissed out of hand.
EDIT: found the right quote, duh.