My biggest criticism of SI is that I cannot decide between:
A. promoting awareness of AI and FAI issues will decrease the chance of UFAI catastrophe; or
B. promoting awareness of AI and FAI issues will increase the chance of UFAI catastrophe.
This criticism seems distinct from the ones that Holden makes. But it is my primary concern. (Perhaps the closest example is Holden’s analogy that SI is trying to develop Facebook before the Internet.)
A seems intuitive. Basically everyone associated with SI assumes that A is true, as far as I can tell. But A is not obviously true to me. It seems to me at least plausible that:
A1. promoting AI and FAI issues will get lots of scattered groups around the world more interested in creating AGI;
A2. one of these groups will develop AGI faster than otherwise due to A1;
A3. the world will be at greater risk of UFAI catastrophe than otherwise due to A2 (i.e. the group creates AGI faster than otherwise, and fails at FAI).
More simply: SI’s general efforts, albeit well-intentioned, might accelerate the creation of AGI, and that acceleration might decrease the odds of the first AGI being friendly. This is one path by which B, not A, would be true.
SI might reply that, although it promotes AGI, it very specifically limits its promotion to FAI. Although that is SI’s intention, it is not at all clear that promoting FAI will not have the unintended consequence of accelerating UFAI. By analogy, if a responsible older brother goes around promoting gun safety all the time, the little brother might be more likely to accidentally blow his face off than if the older brother had just kept his mouth shut. Maybe the older brother should have kept his mouth shut, maybe he shouldn’t have… it’s not clear either way.
If B is more true than A, the best thing that SI could do would probably be to develop clandestine missions to assassinate people who try to develop AGI. SI does almost the exact opposite.
SI’s efforts are based on the assumption that A is true. But it’s far from clear to me that A, rather than B, is true. Maybe it is, maybe it isn’t. SI seems overconfident that A is true. I’ve never heard anyone at SI (or elsewhere) really address this criticism.
Although I’m a lawyer, I’ve developed my own pet meta-approach to philosophy. I call it the “Cognitive Biases Plus Semantic Ambiguity” approach (CB+SA). Both prongs (CB and SA) help explain the amazing lack of progress in philosophy.
First, cognitive biases—or (roughly speaking) cognitive illusions—are persistent by nature. It is no coincidence that cognitive illusions (like visual illusions) are persistent and that philosophy problems are persistent. Philosophy problems cluster around those that involve cognitive illusions (positive outcome bias, the just-world phenomenon, the Lake Wobegon effect, the fundamental attribution error, etc.). I see this in my favorite topic area (the free will problem), but I believe that it likely applies broadly across philosophy.
Second, semantic ambiguity creates persistent problems if not identified and fixed. The solutions to several of Hilbert’s 23 problems are “no answer—problem statement is not well defined.” That approach is unsexy and emotionally dissatisfying (all of this work, yet we get no answer!). Perhaps for that reason, philosophers (but not mathematicians) seem completely incapable of doing it. On only the rarest occasions do philosophers suggest that some term (“good,” “morality,” “rationalism,” “free will,” “soul,” “knowledge”) might not possess a definition that is precise enough to do the work that we ask of it. In fact, as with CB, philosophy problems tend to cluster around those that persist because of SA. (If the problems didn’t persist, they might be considered trivial or boring.)