Okay, thinking it over for the last hour, I now have a concrete statement to make about my willingness to donate to SIAI. I promise to donate $2000 to SIAI in a year’s time if by that time SIAI has secured a 2-star rating from GiveWell for donors who are interested in existential risk.
I will urge GiveWell to evaluate existential risk charities with a view toward making this condition a fair one. If after a year's time GiveWell has not yet evaluated SIAI, my offer will still stand.
[Edit: Slightly rephrased, removed condition involving quotes which had been taken out of context.]
Thanks for expressing an interest in donating to SIAI.
(a) SIAI has secured a 2-star rating from GiveWell for donors who are interested in existential risk.
I assure you that we are very interested in getting the GiveWell stamp of approval. Michael Vassar and Anna Salamon have corresponded with Holden Karnofsky on the matter and we're trying to figure out the best way to proceed.
If it were just a matter of SIAI becoming more transparent and producing a larger number of clear outputs I would say that it is only a matter of time. As it stands, GiveWell does not know how to objectively evaluate activities focused on existential risk reduction. For that matter, neither do we, at least not directly. We don’t know of any way to tell what percentage of worlds that branch off from this one go on to flourish and how many go on to die. If GiveWell decides not to endorse charities focused on existential risk reduction as a general policy, there is little we can do about it. Would you consider an alternative set of criteria if this turns out to be the case?
We think that UFAI is the largest known existential risk and that the most complete solution—FAI—addresses all other known risks (as well as the goals of every other charitable cause) as a special case. I don’t mean to imply that AI is the only risk worth addressing at the moment, but it certainly seems to us to be the best value on the margin. We are working to make the construction of UFAI less likely through outreach (conferences like the Summit, academic publications, blog posts like The Sequences, popular books and personal communication) and make the construction of FAI more likely through direct work on FAI theory and the identification and recruitment of more people capable of working on FAI. We’ve met and worked with several promising candidates in the past few months. We’ll be informing interested folk about our specific accomplishments in our new monthly newsletter, the June/July issue of which was sent out a few weeks ago. You can sign up here.
(b) You publicly apologize for and qualify your statements quoted by XiXiDu here. I believe that these statements are very bad for public relations. Even if true, they are only true at the margin and so at the very least need to be qualified in that way.
It would have been a good idea for you to watch the videos yourself before assuming that XiXiDu’s summaries (not actual quotes, despite the quotation marks that surrounded them) were accurate. Eliezer makes it very clear, over and over, that he is speaking about the value of contributions at the margin. As others have already pointed out, it should not be surprising that we think the best way to “help save the human race” is to contribute to FAI being built before UFAI. If we thought there was another higher-value project then we would be working on that. Really, we would. Everyone at SIAI is an aspiring rationalist first and singularitarian second.
If GiveWell decides not to endorse charities focused on existential risk reduction as a general policy, there is little we can do about it. Would you consider an alternative set of criteria if this turns out to be the case?
Yes, I would consider an alternative set of criteria if this turns out to be the case.
I have long felt that GiveWell places too much emphasis on demonstrated impact and believe that in doing so GiveWell may be missing some of the highest expected value opportunities for donors.
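The tension between demonstrated impact and expected value can be made concrete with a toy calculation. All of the numbers below are invented purely for illustration; they are not estimates of any real charity's impact:

```python
# Toy expected-value comparison (all figures hypothetical).
# A "proven" charity with well-demonstrated impact versus a speculative
# high-stakes one with deep uncertainty about whether it works at all.

def expected_value(prob_success, value_if_success):
    """Expected value = probability of success x value delivered."""
    return prob_success * value_if_success

# Proven intervention: near-certain, modest impact per dollar.
proven = expected_value(prob_success=0.95, value_if_success=100)

# Speculative intervention: very unlikely to pay off, enormous if it does.
speculative = expected_value(prob_success=0.001, value_if_success=1_000_000)

print(f"proven: {proven:.1f}")            # proven: 95.0
print(f"speculative: {speculative:.1f}")  # speculative: 1000.0
```

On these invented numbers the speculative option dominates despite having almost no demonstrable track record, which is exactly the kind of opportunity an evaluator focused on demonstrated impact would miss.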
It would have been a good idea for you to watch the videos yourself before assuming that XiXiDu’s summaries (not actual quotes, despite the quotation marks that surrounded them) were accurate.
I was not sure that XiXiDu's summaries were accurate, which is why I added a disclaimer to my original remark. I have edited my original comment accordingly.
I apologize to Eliezer for inadvertently publicizing a misinterpretation of his views.
(b) You publicly apologize for and qualify your statements quoted by XiXiDu here. I believe that these statements are very bad for public relations. Even if true, they are only true at the margin and so at the very least need to be qualified in that way.
(Note: I have not had the chance to verify that XiXiDu is quoting you correctly because I have not had access to online video for the past few weeks—condition (b) is given on the assumption that the quotes are accurate)
It is always wrong to demand the retraction of a quote which you have not seen in context.
As I said, condition (b) is given based on an assumption which may be wrong. In any case, for public relations purposes, I want SIAI to very clearly indicate that it does not request arbitrarily large amounts of funding from donors.
I want SIAI to very clearly indicate that it does not request arbitrarily large amounts of funding from donors.
In one of the videos XiXiDu cites as reference, Eliezer predicts that funding at the level of a billion dollars would be counterproductive, because it would attract the wrong kind of attention.
Sorry for attaching a misrepresentation of your view to you based on descriptions of videos which I had not seen! I have edited my other comment accordingly.
I donated 10% of my annual (graduate student) income to VillageReach a few weeks ago. As I say in my post, this is not because I have special attachment to international aid.
I would be willing to donate to SIAI if SIAI could convince me that it would use the money well. At present I don't even know how much money SIAI receives in donations a year, much less how it's used and whether the organization has room for more funding.
I believe that there are others like me and that in the long run exhibiting transparency would allow SIAI to attract more than enough extra donations to cover the costs of transparency. Note that GiveWell leveraged 1 million dollars last year and that this amount may be increasing exponentially (as GiveWell is still quite young).
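To put a number on what "increasing exponentially" would mean here, a minimal compounding sketch (the $1 million starting point is from the comment above; the doubling rate is an invented assumption, purely for illustration):

```python
# Project money moved under an assumed annual growth rate.
# $1M starting point is from the comment; the 2x/year rate is hypothetical.
money_moved = 1_000_000
growth_rate = 2.0  # assumption: doubling each year

for year in range(1, 6):
    money_moved *= growth_rate
    print(f"year {year}: ${money_moved:,.0f}")
```

Even a few years of growth like this would make the donations unlocked by a positive evaluation large relative to the cost of becoming transparent.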
At present I don’t even know how much money SIAI receives in donations a year, much less how it’s used and whether the organization has room for more funding.
I would also like to see SIAI post a description of its finances and leadership structure.
I agree it would be good if more info on finances were readily available. There are tax returns available on Guidestar (with free registration), although I think the most recent is from 2008. But as for leadership structure, is this link the sort of thing you had in mind, or were you looking for an actual org chart or something?
Having run a small non-profit operation for a few years now, the standard of transparency I now like is simply publishing our General Ledger to the Web every year.
What’s nice about it: it’s a) feasible (our accounts are in Xero and once you’ve figured out the export it’s a breeze), b) the ultimate in transparency. We still do summary reports to give people an idea of what’s happened with the money, but anyone who complains or for some other reason wants the raw data, I can just point at the GL.
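A minimal sketch of the summary-report side of this workflow, assuming the ledger has been exported from the accounting system as a CSV. The column names ("Account", "Debit", "Credit") and the file layout are assumptions for illustration, not Xero's actual export schema:

```python
import csv

# Summarize a general-ledger CSV export by account, so the raw export
# can be published to the web alongside a human-readable summary.
# Assumed columns: "Account", "Debit", "Credit" (hypothetical schema).

def summarize_ledger(path):
    """Return {account: net amount} from a ledger CSV (debits positive)."""
    totals = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            debit = float(row.get("Debit") or 0)
            credit = float(row.get("Credit") or 0)
            account = row["Account"]
            totals[account] = totals.get(account, 0.0) + debit - credit
    return totals
```

Publishing the raw CSV plus a per-account rollup like this is roughly the "summary reports for most people, raw GL for anyone who asks" split described above.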
How much will you donate to cover the costs? It’s always easy to spend other people’s money.
It is always wrong to demand the retraction of a quote which you have not seen in context.
Thanks, I have edited my comment accordingly.
In one of the videos XiXiDu cites as reference, Eliezer predicts that funding at the level of a billion dollars would be counterproductive, because it would attract the wrong kind of attention.
Thanks for pointing this out. I edited my comment accordingly.
It looks like the statements were misquoted by XiXiDu!
Thanks for pointing this out.
OK, the leadership structure info is satisfactory.