even if you think there’s a 20% chance we make it, that’s not the same as thinking that 20% of Everett branches starting in this position make it
Although worlds starting in this position are a tiny minority anyway, right? Most of the Everett branches containing “humanity” have histories very different from our own. And if alignment is neither easy nor impossible—if it requires insights fitting “in a textbook from the future”, per Eliezer—I think we can say with reasonable (logical) confidence that a non-trivial fraction of worlds will see a successful humanity, because all that is required for success in such a scenario is having a competent alignment-aware world government. Looking at the history of Earth governments, I think we can say that while such a scenario may be unlikely, it is not so unlikely as to render us overwhelmingly likely to fail.
I think a more likely reason for a preponderance of "failure" branches is that alignment in full generality may be intractable. But such a scenario would have its upsides, and it would also make a hard binary of "success/failure" less meaningful.