Jonah,

Thanks for expressing an interest in donating to SIAI.
(a) SIAI has secured a 2 star donation recommendation from GiveWell for donors who are interested in existential risk.
I assure you that we are very interested in getting the GiveWell stamp of approval. Michael Vassar and Anna Salamon have corresponded with Holden Karnofsky on the matter and we’re trying to figure out the best way to proceed.
If it were just a matter of SIAI becoming more transparent and producing a larger number of clear outputs, I would say that it is only a matter of time. As it stands, GiveWell does not know how to objectively evaluate activities focused on existential risk reduction. For that matter, neither do we, at least not directly. We don’t know of any way to tell what fraction of the worlds that branch off from this one go on to flourish and what fraction go on to die. If GiveWell decides not to endorse charities focused on existential risk reduction as a general policy, there is little we can do about it. Would you consider an alternative set of criteria if this turns out to be the case?
We think that UFAI is the largest known existential risk and that the most complete solution—FAI—addresses all other known risks (as well as the goals of every other charitable cause) as a special case. I don’t mean to imply that AI is the only risk worth addressing at the moment, but it certainly seems to us to be the best value on the margin. We are working to make the construction of UFAI less likely through outreach (conferences like the Summit, academic publications, blog posts like The Sequences, popular books and personal communication) and make the construction of FAI more likely through direct work on FAI theory and the identification and recruitment of more people capable of working on FAI. We’ve met and worked with several promising candidates in the past few months. We’ll be informing interested folk about our specific accomplishments in our new monthly newsletter, the June/July issue of which was sent out a few weeks ago. You can sign up here.
(b) You publicly apologize for and qualify your statements quoted by XiXiDu here. I believe that these statements are very bad for public relations. Even if true, they are only true at the margin and so at the very least need to be qualified in that way.
It would have been a good idea for you to watch the videos yourself before assuming that XiXiDu’s summaries (not actual quotes, despite the quotation marks that surrounded them) were accurate. Eliezer makes it very clear, over and over, that he is speaking about the value of contributions at the margin. As others have already pointed out, it should not be surprising that we think the best way to “help save the human race” is to contribute to FAI being built before UFAI. If we thought there was another higher-value project, then we would be working on that. Really, we would. Everyone at SIAI is an aspiring rationalist first and a singularitarian second.
If GiveWell decides not to endorse charities focused on existential risk reduction as a general policy, there is little we can do about it. Would you consider an alternative set of criteria if this turns out to be the case?
Yes, I would consider an alternative set of criteria if this turns out to be the case.
I have long felt that GiveWell places too much emphasis on demonstrated impact, and I believe that in doing so it may be missing some of the highest expected-value opportunities for donors.
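As a toy illustration of that point, with entirely made-up numbers rather than estimates for any real charity: an intervention with well-demonstrated but modest impact can have a lower expected value than a speculative one whose payoff is enormous but unlikely.

\begin{align*}
\text{E[proven]} &= 1.0 \times 10^{4} \text{ lives} = 10{,}000 \text{ lives}\\
\text{E[speculative]} &= 10^{-3} \times 10^{9} \text{ lives} = 10^{6} \text{ lives}
\end{align*}

An evaluator that only credits the demonstrated 10,000 lives would rank the proven intervention higher, even though, under these hypothetical numbers, the speculative one has a hundred times the expected value.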
It would have been a good idea for you to watch the videos yourself before assuming that XiXiDu’s summaries (not actual quotes, despite the quotation marks that surrounded them) were accurate.
I was not sure that XiXiDu’s summaries were accurate, which is why I added a disclaimer to my original remark. I have edited my original comment accordingly.
I apologize to Eliezer for inadvertently publicizing a misinterpretation of his views.