Hi, here are the details of whom I spoke with and why:
I originally emailed Michael Vassar, letting him know I was going to be in the Bay Area and asking whether there was anyone appropriate for me to meet with. He set me up with Jasen Murray.
Justin Shovelain and an SIAI donor were also present when I spoke with Jasen. There may have been one or two others; I don’t recall.
After we met, I sent the notes to Jasen for review. He sent back comments and also asked me to run it by Amy Willey and Michael Vassar, who each provided some corrections via email that I incorporated.
A couple of other comments:
If SIAI wants to set up another room for more funding discussion, I’d be happy to do that and to post new notes.
In general, we’re always happy to post corrections or updates on any content we post, including how that content is framed and presented. The best way to get our attention is to email us at info@givewell.org.
And a tangential comment/question for Louie: I do not understand why you link to my two LW posts using the anchor text you use. These posts are not about GiveWell’s process. They both argue that standard Bayesian inference indicates against the literal use of non-robust expected value estimates, particularly in “Pascal’s Mugging” type scenarios. Michael Vassar’s response to the first of these was that I was attacking a straw man. There are unresolved disagreements about some of the specific modeling assumptions and implications of these posts, but I don’t see any way in which they imply a “limited process” or “blinding to the possibility of SIAI’s being a good giving opportunity.” I do agree that SIAI hasn’t been a fit for our standard process (and is more suited to GiveWell Labs) but I don’t see anything in these posts that illustrates that—what do you have in mind here?
Hi Holden,
I just read this thread today. I made a clarification upthread about the description of my comment above, under Louie’s. Also, I’d like to register that I thought your characterization of that interview as such was fine, even without the clarifications you make here.
They both argue that standard Bayesian inference indicates against the literal use of non-robust expected value estimates, particularly in “Pascal’s Mugging” type scenarios.
As a technical point, I don’t think these posts address “Pascal’s Mugging” scenarios in any meaningful way.
Bayesian adjustment is a standard part of Pascal’s Mugging. The problem is that Solomonoff complexity priors have fat tails, because describing fundamental laws of physics that allow large payoffs is not radically more complex than laws that only allow small payoffs. It doesn’t take an extra 10^1000 bits to describe a world where an action generates 2^(10^1000) times as many, e.g. happy puppies. So we can’t rule out black swans a priori in that framework (without something like an anthropic assumption that amounts to the Doomsday Argument).
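A toy Python sketch (not part of the original comment) of the description-length point: the number 2^(10^1000) has an astronomically long decimal expansion, yet its description fits in about a dozen characters, so a complexity prior charges the huge-payoff hypothesis only a handful of extra bits.

```python
# Toy illustration: enormous payoffs can have short descriptions.
# The expression "2**(10**1000)" is 13 characters long, but the number it
# names has on the order of 3 * 10**999 decimal digits. We count the digits
# with integer arithmetic (log10(2) ~= 0.30103) instead of evaluating the
# number, which would never fit in memory.
expr = "2**(10**1000)"
digit_count = 10**1000 * 30103 // 100000   # ~= 10**1000 * log10(2)

print("characters in the description:", len(expr))
print("decimal digits in the number: about 3 * 10**%d" % (len(str(digit_count)) - 1))
```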
The only thing in your posts that could help with Pascal’s Mugging is the assumption of infinite certainty in a distribution without relevantly fat tails or black swans, like a normal or log-normal distribution. But that would be an extreme move, taking coherent worlds of equal simplicity and massively penalizing the ones with high payoffs, so that no evidence that could fit in a human brain could convince us we were in the high-payoff worlds. Without some justification, that seems to amount to assuming the problem away, not addressing it.
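To make the contrast concrete, here is a rough numerical sketch (every number in it is invented for illustration and is not a model anyone in this thread has endorsed): a cost-effectiveness claim ten orders of magnitude above a typical charity, measured very noisily, Bayes-adjusted under a thin-tailed normal prior and under a fat-tailed Student-t prior of the same scale.

```python
# Toy sketch (all numbers invented): how much does Bayesian adjustment shrink
# an extreme claim under a thin-tailed vs. a fat-tailed prior?
# x = log10(cost-effectiveness relative to a typical charity).
import numpy as np
from scipy import stats

claim, claim_sd = 10.0, 5.0              # a very noisy claim, 10 orders of magnitude out
x = np.linspace(-30.0, 60.0, 90001)      # uniform grid over log10(cost-effectiveness)
likelihood = stats.norm.pdf(claim, loc=x, scale=claim_sd)

def summarize(prior_pdf, label):
    post = prior_pdf * likelihood
    post /= post.sum()                    # normalize on the uniform grid
    mean = (x * post).sum()               # posterior mean of log10(cost-effectiveness)
    p_big = post[x >= claim - 2.0].sum()  # P(the claim is within ~2 orders of magnitude of right)
    print(label, "posterior mean:", round(mean, 2), " P(roughly right):", p_big)

summarize(stats.norm.pdf(x, loc=0.0, scale=1.0), "normal prior")  # heavy shrinkage; tail probability negligible
summarize(stats.t.pdf(x, df=2, scale=1.0), "t(2) prior  ")        # much less shrinkage; tail probability of a few percent
```

With the normal prior the claim is shrunk almost entirely back to the prior and the chance that it is roughly right is negligible; with the fat-tailed prior that chance stays at a few percent, so the expected payoff remains dominated by the high-payoff possibility, which is the objection above.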
Disclaimer 1: This is about expected value measured in the currency of “goods” like happy puppies, rather than expected utility, since agents can have bounded utility, e.g. simply not caring much more about saving a billion billion puppies than about saving a billion. This seems fairly true of most people, at least emotionally.
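A quick illustration of that disclaimer, using a bounded utility function chosen purely for the example:

```python
# Toy bounded utility: u(n) = 1 - exp(-n / s) saturates, so a billion billion
# puppies is worth only slightly more utility than a billion, even though the
# quantity of "goods" differs by a factor of 10**9.
import math

s = 1e9   # saturation scale: past ~a billion puppies, extra puppies add little utility

def u(n):
    return 1.0 - math.exp(-n / s)

print(u(1e9))    # ~0.63
print(u(1e18))   # ~1.0 (the exponential underflows to zero here)
```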
Disclaimer 2: Occam’s razor priors give high value to Pascal’s Mugging cases, but they also give higher expectations to all other actions. For instance, the chance that space colonization will let huge populations be created increases the expected value of reducing existential risk by many orders of magnitude to total utilitarians. But it also greatly increases the expected payoffs of anything else that reduces existential risk by even a little. So if vaccinating African kids is expected to improve the odds of human survival going forward (not obvious but plausible) then its expected value will be driven to within sight of focused existential risk reductions, e.g. vaccination might be a billionth the cost-effectiveness of focused risk-reduction efforts but probably not smaller by a factor of 10^20. By the same token, different focused existential risk interventions will compete against one another, so one will not want to support the relatively ineffective ones.
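A back-of-the-envelope version of that comparison (every figure below is invented, purely to show the structure of the argument): if the value of the long-run future conditional on survival is some huge number V, both interventions’ expected values scale with V, so the ratio between them does not depend on how large V is.

```python
# Toy arithmetic (all figures invented): both interventions' expected value per
# dollar scales with V, the value of the long-run future conditional on
# survival, so their ratio is independent of how astronomically large V is.
V = 1e40            # value of the future if humanity survives (arbitrary units)
dp_focused = 1e-10  # assumed reduction in extinction probability per dollar, focused effort
dp_vaccine = 1e-19  # assumed reduction per dollar via general health/resilience benefits

ev_focused = dp_focused * V
ev_vaccine = dp_vaccine * V

print("EV per dollar, focused risk reduction:", ev_focused)
print("EV per dollar, vaccination:", ev_vaccine)
print("ratio:", ev_vaccine / ev_focused)   # ~1e-9 no matter what value is chosen for V
```

On these made-up numbers vaccination comes out a billionth as cost-effective as the focused effort rather than 10^-20 of it, which is the “within sight” point above.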
Carl, it looks like we have a pretty substantial disagreement about key properties of the appropriate prior distribution over expected value of one’s actions.
I am not sure whether you are literally endorsing a particular distribution (I am not sure whether “Solomonoff complexity prior” is sufficiently well-defined or, if so, whether you are endorsing that or a varied/adjusted version). I myself have not endorsed a particular distribution. So it seems like the right way to resolve our disagreement is for at least one of us to be more specific about what properties are core to our argument and why we believe any reasonable prior ought to have these properties. I’m not sure when I will be able to do this on my end and will likely contact you by email when I do.
What I do not agree with is the implication that my analysis is irrelevant to Pascal’s Mugging. It may be irrelevant for people who endorse the sorts of priors you endorse. But not everyone agrees with you about what the proper prior looks like, and many people who are closer to me on what the appropriate prior looks like still seem unaware of the implications for Pascal’s Mugging. If nothing else, my analysis highlights a relationship between one’s prior distribution and Pascal’s Mugging that I believe many others weren’t aware of. Whether it is a decisive refutation of Pascal’s Mugging is unresolved (and depends on the disagreement I refer to above).
Thanks for the helpful comments! I was uninformed about all those details above.
These posts are not about GiveWell’s process.
One of the posts has the sub-heading “The GiveWell approach” and all of the analysis in both posts uses examples of charities you’re comparing. I agree you weren’t just talking about the GiveWell process… you were talking about a larger philosophy of science you have that informs things like the GiveWell process.
I recognize that you’re making sophisticated arguments for your points, especially for the assumptions you claim simply must be true to satisfy your intuition that charities should be rewarded for transparency and punished otherwise. Those assumptions seem wise from a “getting things done” point of view for an org like GiveWell, even though there is no mathematical reason they should be true, only a human-level tit-for-tat shame/enforcement mechanism you hope eventually makes them circularly “true” through repeated application. Seems fair enough.
But adding regression adjustments to cancel out the effectiveness of any charity which looks too effective to be believed (based on the common sense of the evaluator) seems like a pretty big finger on the scale. Why do so much analysis in the beginning if the last step of the algorithm is just “re-adjust effectiveness and expected value to equal what feels right”? Your adjustment factor amounts to a kind of Egalitarian Effectiveness Assumption: We are all created equal at turning money into goodness. Or perhaps it’s more of a negative statement, like, “None of us is any better than the best of us at turning money into goodness”—where the upper limit on the best is something like 1000x or whatever the evaluator has encountered in the past. Any claim made above that limit gets adjusted back down—those guys were trying to Pascal’s Mug us! That’s the way in which there’s a blinding effect. You disbelieve the claims of any group that claims to be more effective per capita than you think is possible.
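For what it is worth, here is a toy sketch of the “cap” behavior being described (all numbers are invented, and the key assumption that the estimate’s noise grows in proportion to the size of the claim is an illustration, not something argued in the posts): under a normal prior over log10(effectiveness), the Bayes-adjusted value stops tracking the raw claim, peaks, and then falls back toward the prior.

```python
# Toy sketch (all numbers invented): with a normal prior over
# log10(effectiveness) and estimate noise that grows with the size of the
# claim, the Bayesian-adjusted value flattens out and then falls back toward
# the prior, no matter how large the raw claim gets.
prior_mean, prior_var = 0.0, 1.0

def adjusted(claim_log10, error_sd):
    """Posterior mean for a normal prior and a normal estimate, both on the log10 scale."""
    weight = (1.0 / error_sd**2) / (1.0 / prior_var + 1.0 / error_sd**2)
    return prior_mean + weight * (claim_log10 - prior_mean)

for claim in [1, 2, 3, 5, 10, 20]:   # claimed orders of magnitude above a typical charity
    noise = 0.5 * claim              # assumed: the wilder the claim, the noisier the estimate
    print(claim, "->", round(adjusted(claim, noise), 2))
```

On these assumptions the adjusted value never rises much above one order of magnitude over the prior, which is the capping effect described above.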
Louie, I think you’re mischaracterizing these posts and their implications. The argument is much closer to “extraordinary claims require extraordinary evidence” than it is to “extraordinary claims should simply be disregarded.” And I have outlined (in the conversation with SIAI) ways in which I believe SIAI could generate the evidence needed for me to put greater weight on its claims.
I wrote more in my follow-up comment on the first post about why an aversion to arguments that seem similar to “Pascal’s Mugging” does not entail an aversion to supporting x-risk charities. (As mentioned in that comment, it appears that important SIAI staff share such an aversion, whether or not they agree with my formal defense of it.)
I also think the message of these posts is consistent with the best available models of how the world works—it isn’t just about trying to set incentives. That’s probably a conversation for another time—there seems to be a lot of confusion on these posts (especially the second) and I will probably post some clarification at a later date.