I definitely think that, alongside the introductory What is the Singularity? and Why work toward the Singularity? pages, SIAI should have a prominent page stating the basic case for donating to SIAI. Why work toward the Singularity? already explains why bringing about a positive Singularity would have a very high humanitarian impact, but it would probably be beneficial to make the additional case that SIAI’s research program is likely to increase the probability of that outcome, and that donations at its current funding level have a high marginal expected utility compared to other charities.
Anna’s two Singularity Summit 2009 talks have some valuable content that would be relevant to such a page, I think. (But it would need to cover more than that.)
I thought this was such a page:
http://singinst.org/riskintro/index.html
I think the page makes a case that it is worth doing something about AI risk, and that SIAI is doing something. But the page gives the reader no reason to think that SIAI is doing better than anything else you could do about x-risk (there could be reasons elsewhere).

In this respect, the page is similar to other non-profit pages: (i) argue that there is a problem, (ii) argue that you’re doing something to solve the problem, but don’t (iii) try to show that you’re solving the problem better than others. Maybe that’s reasonable, since comparisons rub some donors the wrong way and it’s hard to establish that you’re the best; but it doesn’t advance our discussion about the best way to reduce x-risk.
Ah, yes, I had forgotten about that. Thanks.