CFAR’s Inaugural Fundraising Drive
http://appliedrationality.org/fundraising/
(interested in hearing how other donors frame allocation between SI and CFAR)
I still donate only to SI. It's great that we can supposedly aim the money at FAI now, thanks to the pivot towards research.
But I would also love to see EY’s appeal to MoR readers succeed:
I’m donating to CFAR but not SI because CFAR would help in a wider variety of scenarios.
If AGI will be developed by a single person or a very small team, it seems likely that it won't be done by someone we recognize in advance as likely to do it (think of the inventors of the airplane or the web). CFAR is more oriented toward influencing large numbers of smart people, which makes it more likely to reach such a developer.
Single-person AGI development seems like a low-probability scenario to me, but the more people needed to create an AGI, the less plausible it seems that intelligence will be intelligible enough to go foom. So I imagine that a relatively high fraction of the scenarios in which UFAI takes over the world come from very small development teams.
Plus it’s quite possible that we’re all asking the wrong questions about existential risks. CFAR seems more likely than SI to help in those scenarios.
I was a July minicamp attendee; AMA if it will help inform your donation decisions.