I’m donating to CFAR but not SI because CFAR would help in a wider variety of scenarios.
If AGI is developed by a single person or a very small team, it probably won’t be done by someone we recognized in advance as likely to do it (think of the inventions of the airplane or the web, for example). CFAR is oriented toward influencing a large enough number of smart people that it is more likely than SI to reach such a developer.
Single-person AGI development seems like a low-probability scenario to me, but the more people needed to create an AGI, the less plausible it seems that intelligence will be intelligible enough to go foom. So I imagine a relatively high fraction of the scenarios in which UFAI takes over the world as coming from very small development teams.
Plus it’s quite possible that we’re all asking the wrong questions about existential risks. CFAR seems more likely than SI to help in those scenarios.