Say I have $1000 to donate. Can you give me your elevator pitch about why I should donate it (in totality or in part) to the SIAI instead of to the SENS Foundation?
Updating top level with expanded question:
I ask because that’s my personal dilemma: SENS or SIAI, or maybe both, but in what proportions?
So far I’ve donated roughly 6x more to SENS than to SIAI because, while I think a friendly AGI is “bigger”, it seems like SENS has a higher probability of paying off first, which would stop the massacre of aging and help ensure I’m still around when a friendly AGI is launched, if that ends up taking a while (usual caveats apply: existential risks, etc.).
It also seems to me like more dollars for SENS are almost assured to result in a faster rate of progress (more people working in labs, more compounds screened, more and better equipment, etc.), while more dollars for the SIAI don’t seem like they would have quite such a direct effect on the rate of progress (though since I know less about what the SIAI does than about what SENS does, I could be mistaken about the effect additional money would have).
If you don’t want to pitch SIAI over SENS, maybe you could discuss these points so that I, and others, are better able to make informed decisions about how to spend our philanthropic monies.
Hi there MichaelGR,
I’m glad to see you asking not just how to do good with your dollar, but how to do the most good with your dollar. Optimization is lives-saving.
Regarding what SIAI could do with a marginal $1000, the one-sentence version is: “more rapidly mobilize talented or powerful people (many of them outside of SIAI) to work seriously to reduce AI risks”. My impression is that we are strongly money-limited at the moment: more donations would allow us to reduce existential risk more significantly.
In more detail:
Existential risk can be reduced by (among other pathways):
1. Getting folks with money, brains, academic influence, money-making influence, and other forms of power to take UFAI risks seriously; and
2. Creating better strategy, and especially well-written, credible, readable strategy, for how interested people can reduce AI risks.
SIAI is currently engaged in a number of specific projects toward both #1 and #2, and we have a backlog of similar projects waiting for skilled person-hours with which to do them. Our recent efforts along these lines have gotten good returns on the money and time we invested, and I’d expect similar returns from the (similar) projects we can’t currently get to. I’ll list some examples of projects we have recently done, and their fruits, to give you a sense of what this looks like:
Academic talks and journal articles (which have given us a number of high-quality academic allies, and have created more academic literature and hence increased academic respectability for AI risks):
“Changing the frame of AI futurism: From storytelling to heavy-tailed, high-dimensional probability distributions”, by Steve Rayhawk, myself, Tom McCabe, Rolf Nelson, and Michael Anissimov. (Presented at the European Conference of Computing and Philosophy in July ’09 (ECAP))
“Arms Control and Intelligence Explosions”, by Carl Shulman (Also presented at ECAP)
“Machine Ethics and Superintelligence”, by Carl Shulman and Henrik Jonsson (Presented at the Asia-Pacific Conference of Computing and Philosophy in October ’09 (APCAP))
“Which Consequentialism? Machine Ethics and Moral Divergence”, by Carl Shulman and Nick Tarleton (Also presented at APCAP)
“Long-term AI forecasting: Building methodologies that work”, an invited presentation by myself at the Santa Fe Institute conference on forecasting
And several more at various stages of the writing process, including some journal papers.
The Singularity Summit, and the academic workshop discussions that followed it. (This was a net money-maker for SIAI if you don’t count Michael Vassar’s time; if you do count his time, the Summit roughly broke even, but it created significantly increased interest among academics, potential donors, and others who may take useful action in various ways. Some good ideas were generated at the workshop as well.)
The 2009 SIAI Summer Fellows Program (This cost about $30k, counting stipends for the SIAI staff involved. We had 15 people for varying periods of time over 3 months. Some of the papers above were completed there; also, human capital gains were significant, as at least three of the program’s graduates have continued to do useful research with the skills they gained, and at least three others plan to become long-term donors who earn money and put it toward existential risk reduction.)
Miscellaneous additional examples:
The “Uncertain Future” AI timelines modeling webapp (currently in alpha)
A decision theory research paper discussing the idea of “acausal trade” in various decision theories, and its implications for the importance of the decision theory built into powerful or seed AIs (this project is being funded by Less Wronger ‘Utilitarian’)
Planning and market research for a popular book on AI risks and FAI (just started, with a small grant from a new donor)
A pilot program for conference grants to enable the presentation of work relating to AI risks (also just getting started, with a second small grant from the same donor)
Internal SIAI strategy documents, helping sort out a coherent strategy for the activities above.
(This level of activity is a change from past periods: SIAI added a number of new people and project types in the last year, notably our president Michael Vassar, and also Steve Rayhawk, myself, Michael Anissimov, volunteer Zack Davis, and some longer-term volunteers from the SIAI Summer Fellows Program mentioned above.)
(There are also core SIAI activities that are not near the margin but are supported by our current donation base, notably Eliezer’s writing and research.)
How efficiently can we turn a marginal $1000 into more rapid project-completion?
As far as I can tell, rather efficiently. The skilled people we have today are booked and still can’t find time for all the high-value projects in our backlog (including many academic papers for which the ideas have long been floating around, but which aren’t yet written where academia can see and respond to them). A marginal $1k can buy nearly an extra person-month of effort from a Summer Fellow type; such research assistants can speed projects now, and will probably be able to lead similar projects by themselves (or with new research assistants) after a year of such work.
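That $1k-per-person-month rate is roughly consistent with the Summer Fellows figures above. A minimal arithmetic sketch, assuming (purely for illustration; the average stay is not stated above) that the 15 fellows averaged about two of the three months each:

```python
# Cross-check of the ~$1k per person-month figure, using the Summer
# Fellows numbers quoted above. The average stay is an assumption.
program_cost = 30_000    # total cost of the 2009 Summer Fellows Program
fellows = 15             # participants, for varying periods over 3 months
avg_months_each = 2.0    # assumed average stay per fellow (not stated above)

cost_per_person_month = program_cost / (fellows * avg_months_each)
print(f"${cost_per_person_month:,.0f} per person-month")  # $1,000
```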
As to SIAI vs. SENS:
SIAI and SENS have different aims, so which organization gives you more goodness per dollar will depend somewhat on your goals. SIAI is aimed at existential risk reduction, and offers existential risk reduction at a rate that I might very crudely ballpark at 8 expected current lives saved per dollar donated (plus an orders-of-magnitude larger number of potential future lives). You can attempt a similar estimate for SENS by estimating the number of years that SENS advances the timeline for longevity medicine, looking at global demographics, and adjusting for the chances of existential catastrophe while SENS works.
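To make the shape of that estimate concrete, here is a minimal back-of-envelope sketch; every number in it is a placeholder assumption chosen for illustration, not a figure from SIAI or SENS:

```python
# Minimal sketch of the SENS-style estimate described above.
# Every input below is an illustrative assumption; plug in your own.
deaths_from_aging_per_year = 30_000_000  # assumed: very rough global figure
sens_annual_budget = 10_000_000          # assumed funding level, in dollars
years_advanced_per_budget_year = 1.0     # assumed: one extra year of funding
                                         # advances the timeline by one year
p_no_existential_catastrophe = 0.7       # assumed discount for x-risk
                                         # before longevity medicine pays off

years_advanced_per_dollar = years_advanced_per_budget_year / sens_annual_budget
expected_lives_per_dollar = (years_advanced_per_dollar
                             * deaths_from_aging_per_year
                             * p_no_existential_catastrophe)
print(f"{expected_lives_per_dollar:.2f} expected current lives per marginal dollar")
# With these placeholder inputs: (1 / 1e7) * 3e7 * 0.7 = 2.1
```

The point is only the structure of the estimate; the conclusion is as good as the inputs you choose.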
The Future of Humanity Institute at Oxford University is another institution that is effectively reducing existential risk and that could do more with more money. You may wish to include them in your comparison study. (Just don’t let the number of options distract you from in fact using your dollars to purchase expected goodness.)
There’s a lot more to say on all of these points, but I’m trying to be brief—if you want more info on a specific point, let me know which.
It may also be worth mentioning that SIAI accepts donations earmarked for specific projects (provided we think the projects worthwhile). If you’re interested in donating but wish to donate to a specific current or potential project, please email me: anna at singinst dot org. (You don’t need to fully know what you’re doing to go this route; for anyone considering a donation of $1k or more, I’d be happy to brainstorm with you and to work something out together.)
Please post a copy of this comment as a top-level post on the SIAI blog.
You can donate to FHI too? Dang, now I’m conflicted.
Wait… their web form only works with UK currency, and the Americas form requires FHI to be a write-in and may not get there appropriately.
Crisis averted by tiny obstacles.
at 8 expected current lives saved per dollar donated

Even though there is a large margin of error, this is at least 1500 times more effective than the best death-averting charities according to GiveWell. There is a side note, though: while normal charities are incrementally beneficial, SIAI has (roughly speaking) only two possible modes, a total failure mode and a total win mode. Still, expected utility is expected utility. A paltry 150 dollars to save as many lives as Schindler… It’s a shame warm fuzzies scale up so badly...
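To spell out the arithmetic (the cost-per-life figure for GiveWell’s top charities is my rough assumption, used only to show where “1500 times” and the Schindler comparison come from):

```python
siai_lives_per_dollar = 8        # Anna's crude ballpark above
cost_per_life_top_charity = 190  # assumed cost per death averted at a
                                 # GiveWell top charity (illustrative)

# Effectiveness ratio: 8 lives/dollar vs. (1/190) lives/dollar
print(siai_lives_per_dollar * cost_per_life_top_charity)  # 1520 -> "at least 1500x"

# Schindler is credited with saving roughly 1,200 people:
print(150 * siai_lives_per_dollar)  # 1200 lives for $150
```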
Someone should update SIAI’s recent publications page, which is really out of date. In the meantime, I found two of the papers you referred to using Google:
Machine Ethics and Superintelligence
Which Consequentialism? Machine Ethics and Moral Divergence
Those interested in the cost-effectiveness of donations to the SIAI may also want to check Alan Dawrst’s donation recommendation. (Dawrst is “Utilitarian”, the donor that Anna mentions above.)
Thanks for that, Anna. I could only find two of the five academic talks and journal articles you mentioned online. Would you mind posting all of them online and pointing me to where I can access them?
Thank you very much, Anna.
This will help me decide, and I’m sure that it will help others too.
I second Katja’s idea; a version of this should be posted on the SIAI blog.
Kaj’s. :P
I’m sorry, for some reason I thought you were Katja Grace. My mistake.
I tend to regard the SENS Foundation as major fellow travelers, and think that we both tend to benefit from each other’s positive publicity. For this reason I’ve usually tended to avoid this kind of elevator pitch!
Pass to Michael Vassar: Should I answer this?
[I’ve moved what was here to the top level comment]
I’ll flag Vassar or Salamon to describe what sort of efforts SIAI would like to marginally expand into as a function of our present and expected future reliability of funding.