My biggest criticism of SI is that I cannot decide between:
A. promoting AI and FAI issues awareness will decrease the chance of UFAI catastrophe; or
B. promoting AI and FAI issues awareness will increase the chance of UFAI catastrophe
This criticism seems distinct from the ones that Holden makes. But it is my primary concern. (Perhaps the closest example is Holden’s analogy that SI is trying to develop Facebook before the Internet.)
A seems intuitive. Basically everyone associated with SI assumes that A is true, as far as I can tell. But A is not obviously true to me. It seems to me at least plausible that:
A1. promoting AI and FAI issues will get lots of scattered groups around the world more interested in creating AGI
A2. one of these groups will develop AGI faster than otherwise due to A1
A3. the world will be at greater risk of UFAI catastrophe than otherwise due to A2 (i.e. the group creates AGI faster than otherwise, and fails at FAI)
More simply: SI’s general efforts, albeit well-intentioned, might accelerate the creation of AGI, and the acceleration of AGI might decrease the odds of the first AGI being friendly. This is one path by which B, not A, would be true.
SI might reply that, although it promotes AGI, it very specifically limits its promotion to FAI. Although that is SI’s intention, it is not at all clear that promoting FAI will not have the unintended consequence of accelerating UFAI. By analogy, if a responsible older brother goes around promoting gun safety all the time, the little brother might be more likely to accidentally blow his face off, than if the older brother had just kept his mouth shut. Maybe the older brother shouldn’t have kept his mouth shut, maybe he should have… it’s not clear either way.
If B is more true than A, the best thing that SI could do would probably be to develop clandestine missions to assassinate people who try to develop AGI. SI does almost the exact opposite.
SI’s efforts are based on the assumption that A is true. But it’s far from clear to me that A, instead of B, is true. Maybe it is, maybe it isn’t. SI seems overconfident that A is true. I’ve never heard anyone at SI (or elsewhere) really address this criticism.
I like your gun safety analogy. Actually, however, it seems to me that a significant portion of LW shares your doubts, or even favors view B. I second your call for some (more?) direct discussion of the question.