Okay, after thinking it over for the last hour, I now have a concrete statement to make about my willingness to donate to SIAI. I promise to donate $2000 to SIAI in a year’s time if by that time SIAI has secured a 2-star rating from GiveWell for donors who are interested in existential risk.
I will urge GiveWell to evaluate existential risk charities with a view toward making this condition a fair one. If after a year’s time GiveWell has not yet evaluated SIAI, my offer will still stand.
[Edit: Slightly rephrased and removed the condition involving quotes that had been taken out of context.]
Jonah,
Thanks for expressing an interest in donating to SIAI.
(a) SIAI has secured a 2-star rating from GiveWell for donors who are interested in existential risk.
I assure you that we are very interested in getting the GiveWell stamp of approval. Michael Vassar and Anna Salamon have corresponded with Holden Karnofsky on the matter, and we’re trying to figure out the best way to proceed.
If it were just a matter of SIAI becoming more transparent and producing a larger number of clear outputs, I would say that it is only a matter of time. As it stands, GiveWell does not know how to objectively evaluate activities focused on existential risk reduction. For that matter, neither do we, at least not directly. We don’t know of any way to tell what percentage of worlds that branch off from this one go on to flourish and how many go on to die. If GiveWell decides not to endorse charities focused on existential risk reduction as a general policy, there is little we can do about it. Would you consider an alternative set of criteria if this turns out to be the case?
We think that UFAI is the largest known existential risk and that the most complete solution—FAI—addresses all other known risks (as well as the goals of every other charitable cause) as a special case. I don’t mean to imply that AI is the only risk worth addressing at the moment, but it certainly seems to us to be the best value on the margin. We are working to make the construction of UFAI less likely through outreach (conferences like the Summit, academic publications, blog posts like The Sequences, popular books, and personal communication) and to make the construction of FAI more likely through direct work on FAI theory and the identification and recruitment of more people capable of working on FAI. We’ve met and worked with several promising candidates in the past few months. We’ll be informing interested folk about our specific accomplishments in our new monthly newsletter, the June/July issue of which was sent out a few weeks ago. You can sign up here.
(b) You publicly apologize for and qualify your statements quoted by XiXiDu here. I believe that these statements are very bad for public relations. Even if true, they are only true at the margin and so at the very least need to be qualified in that way.
It would have been a good idea for you to watch the videos yourself before assuming that XiXiDu’s summaries (not actual quotes, despite the quotation marks that surrounded them) were accurate. Eliezer makes it very clear, over and over, that he is speaking about the value of contributions at the margin. As others have already pointed out, it should not be surprising that we think the best way to “help save the human race” is to contribute to FAI being built before UFAI. If we thought there was another, higher-value project, then we would be working on that. Really, we would. Everyone at SIAI is an aspiring rationalist first and a singularitarian second.
If GiveWell decides not to endorse charities focused on existential risk reduction as a general policy, there is little we can do about it. Would you consider an alternative set of criteria if this turns out to be the case?
Yes, I would consider an alternative set of criteria if this turns out to be the case.
I have long felt that GiveWell places too much emphasis on demonstrated impact, and I believe that in doing so GiveWell may be missing some of the highest expected-value opportunities for donors.
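To illustrate the expected-value point with a toy calculation (the numbers below are entirely made up for illustration and are not estimates for GiveWell, SIAI, or any real charity), a speculative intervention can come out ahead of a well-demonstrated one in expectation even when its impact is far harder to verify:

    # Toy expected-value comparison; all numbers are hypothetical.
    # "Impact" is in arbitrary units (say, lives saved per $1,000 donated).
    def expected_value(probability_of_impact, impact_if_successful):
        return probability_of_impact * impact_if_successful

    proven = expected_value(0.95, 1.0)          # demonstrated, modest impact -> 0.95
    speculative = expected_value(0.01, 1000.0)  # unproven, large impact      -> 10.0

    print(proven, speculative)

Under these made-up numbers the speculative option is roughly ten times better in expectation, even though almost nothing about its impact can be demonstrated; that gap is what the comment above is pointing at.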
It would have been a good idea for you to watch the videos yourself before assuming that XiXiDu’s summaries (not actual quotes, despite the quotation marks that surrounded them) were accurate.
I was not sure that XiXiDu’s summaries were accurate, which is why I added a disclaimer to my original remark. I have edited my original comment accordingly.
I apologize to Eliezer for inadvertently publicizing a misinterpretation of his views.
(b) You publicly apologize for and qualify your statements quoted by XiXiDu here. I believe that these statements are very bad for public relations. Even if true, they are only true at the margin and so at the very least need to be qualified in that way.
(Note: I have not had the chance to verify that XiXiDu is quoting you correctly because I have not had access to online video for the past few weeks—condition (b) is given on the assumption that the quotes are accurate.)
It is always wrong to demand the retraction of a quote which you have not seen in context.
Thanks, I have edited my comment accordingly.
As I said, condition (b) was given based on an assumption that may be wrong. In any case, for public relations purposes, I want SIAI to very clearly indicate that it does not request arbitrarily large amounts of funding from donors.
In one of the videos XiXiDu cites as a reference, Eliezer predicts that funding at the level of a billion dollars would be counterproductive because it would attract the wrong kind of attention.
Thanks for pointing this out. I edited my comment accordingly.
Statements misquoted by XiXiDu, it looks like!
Thanks for pointing this out.