I view this as one of the best arguments against risks from paperclippers. I’m a little concerned that it hasn’t been dealt with properly by SIAI folks—aside from a few comments by Carl Shulman on Katja’s blog.
The Fermi Paradox was considered a paradox even before anybody started talking about paperclippers. And even if we knew for certain that superintelligence was impossible, the Fermi Paradox would still remain a mystery—it’s not paperclippers (one possible form of colonizer) in particular that are hard to reconcile with the Fermi Paradox, it’s the idea of colonizers in general.
The mere fact that the paradox exists says little about the likelihood of paperclippers, though it does somewhat suggest that we might run into some even worse x-risk before the paperclippers show up. (How much weight you give that “somewhat” depends on whether you think it’s reasonable to presume that we’ve already passed the Great Filter.)