Yes, I agree that this conditional statement is obvious. But while we’re on the general topic of whether Earth will be kept alive, it would be nice to see some engagement with Paul Christiano’s arguments (which Carl Shulman “agree[s] with [...] approximately in full”) that superintelligences might care a little bit about what happens to you, articulated in a comment thread on Soares’s “But Why Would the AI Kill Us?” and in another thread on “Cosmopolitan Values Don’t Come Free”.
Nate Soares engaged extensively with this in reasonable-seeming ways that I’d expect Eliezer Yudkowsky to mostly agree with. Mostly it seems like a disagreement where Paul Christiano doesn’t really have a model of what realistically causes good outcomes, and so he’s very uncertain, whereas Soares has a proper model and so is less uncertain.
But you can’t really argue with someone whose main position is “I don’t know”, since “I don’t know” gives you nothing to push against. He’s gotta at least present some new powerful observable forces, or reject some of the forces already presented, rather than postulating that maybe there’s an unobserved kindness force that arbitrarily explains all the kindness that we see.