Thanks for the helpful comments! I was uninformed about all those details above.
These posts are not about GiveWell’s process.
One of the posts has the sub-heading “The GiveWell approach,” and all of the analysis in both posts uses examples of charities you’re comparing. I agree you weren’t just talking about the GiveWell process… you were talking about a larger philosophy of science you have that informs things like the GiveWell process.
I recognize that you’re making sophisticated arguments for your points, especially for the assumptions you claim simply must be true to satisfy your intuition that charities should be rewarded for transparency and punished otherwise. Those seem wise from a “getting things done” point of view for an org like GiveWell, even though there is no mathematical reason those assumptions should be true, only a human-level tit-for-tat shame/enforcement mechanism you hope eventually makes them circularly “true” through repeated application. Seems fair enough.
But adding regression adjustments to cancel out the effectiveness of any charity that looks too effective to be believed (based on the common sense of the evaluator) seems like a pretty big finger on the scale. Why do so much analysis in the beginning if the last step of the algorithm is just “re-adjust effectiveness and expected value to equal what feels right”? Your adjustment factor amounts to a kind of Egalitarian Effectiveness Assumption: We are all created equal at turning money into goodness. Or perhaps it’s more of a negative statement, like, “None of us is any better than the best of us at turning money into goodness”—where the upper limit on the best is something like 1000x or whatever the evaluator has encountered in the past. Any claim made above that limit gets adjusted back down—those guys were trying to Pascal’s Mug us! That’s the way in which there’s a blinding effect. You disbelieve the claims of any group that claims to be more effective per capita than you think is possible.
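To make the adjustment I’m describing concrete, it can be sketched as Bayesian shrinkage toward the evaluator’s prior: the noisier (less well-evidenced) a claimed cost-effectiveness estimate is, the harder it gets pulled back toward what the evaluator already believed. This is a minimal illustration with made-up numbers, not GiveWell’s actual procedure:

```python
# Hypothetical shrinkage of a claimed cost-effectiveness multiplier toward
# the evaluator's prior. All numbers here are illustrative assumptions.

def shrink(claimed, claim_var, prior_mean, prior_var):
    """Posterior mean for a normal prior updated by a normal estimate."""
    w = prior_var / (prior_var + claim_var)  # weight given to the claim
    return prior_mean + w * (claimed - prior_mean)

# Evaluator's prior: charities are ~10x baseline on average (variance 100).
# A modest, well-measured claim mostly survives the adjustment...
modest = shrink(claimed=30, claim_var=50, prior_mean=10, prior_var=100)

# ...but an extreme, hard-to-verify claim is crushed back to roughly
# the prior, no matter how large the claimed number is.
extreme = shrink(claimed=10_000, claim_var=10**8, prior_mean=10, prior_var=100)
```

The blinding effect I mean falls out of the math: once `claim_var` is large, the posterior is nearly insensitive to `claimed`, so a 10,000x claim and a 500x claim both land at essentially the prior.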
Louie, I think you’re mischaracterizing these posts and their implications. The argument is much closer to “extraordinary claims require extraordinary evidence” than it is to “extraordinary claims should simply be disregarded.” And I have outlined (in the conversation with SIAI) ways in which I believe SIAI could generate the evidence needed for me to put greater weight on its claims.
I wrote more in my follow-up comment on the first post about why an aversion to arguments that seem similar to “Pascal’s Mugging” does not entail an aversion to supporting x-risk charities. (As mentioned in that comment, it appears that important SIAI staff share such an aversion, whether or not they agree with my formal defense of it.)
I also think the message of these posts is consistent with the best available models of how the world works—it isn’t just about trying to set incentives. That’s probably a conversation for another time—there seems to be a lot of confusion on these posts (especially the second) and I will probably post some clarification at a later date.