The Fermi paradox provides some evidence against long-lived civilization of any kind, hostile or non-hostile. Entangling the Fermi paradox with questions about the character of future civilization (such as AI risk) doesn’t seem very helpful.
Obviously, an intelligence looking only to grow itself (and maximize paperclips or whatever) can do this much more easily than one restrained by its biological-or-similar parents.
I disagree. See this post, and Armstrong and Sandberg’s analysis.
The Fermi paradox provides some evidence against long-lived civilization of any kind, hostile or non-hostile. Entangling the Fermi paradox with questions about the character of future civilization (such as AI risk) doesn’t seem very helpful.
To put this point slightly differently, the Fermi paradox isn’t strong evidence for any of the following over the others: (a) humanity will create Friendly AI; (b) humanity will create Unfriendly AI; (c) humanity will not be able to produce any sort of FOOMing AI, but will develop into a future civilization capable of colonizing the stars. This is because, for each of these, if the analogous event had already happened on an alien planet sufficiently close to us (e.g. in our galaxy), we would be able to see the results: to the degree that the Fermi paradox provides evidence about (a), (b) and (c), it provides about the same amount of evidence against each. (It does provide evidence against each, since one possible explanation for the Fermi paradox is a Great Filter that’s still ahead of us.)
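To make the underlying reasoning explicit, here is a toy Bayesian sketch (every number in it is invented purely for illustration): if the probability of our observing a silent sky is roughly the same under (a), (b) and (c), the observation leaves the relative odds among them unchanged, even while it favors a “Great Filter still ahead” hypothesis over all three.

# Toy Bayesian update (all numbers invented for illustration only).
# Hypotheses: (a) FAI, (b) UFAI, (c) no FOOM but starfaring humanity,
# plus a catch-all (d) "a Great Filter still lies ahead of us".
priors = {"a_FAI": 0.3, "b_UFAI": 0.3, "c_no_FOOM": 0.3, "d_filter_ahead": 0.1}

# If the analogous outcome had already happened on a nearby alien world,
# we would very likely see the results, so a silent sky is about equally
# unlikely under (a), (b) and (c), but expected under (d).
likelihood_silent_sky = {"a_FAI": 0.05, "b_UFAI": 0.05, "c_no_FOOM": 0.05,
                         "d_filter_ahead": 0.9}

unnormalized = {h: priors[h] * likelihood_silent_sky[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: mass / total for h, mass in unnormalized.items()}
print(posteriors)
# (a), (b) and (c) each shrink by the same factor, so their odds relative to
# one another are unchanged; only the "filter ahead" hypothesis gains.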
An FAI will always have more rules to follow (“do not eat the ones with life on them”), and I just don’t see how it would have an advantage over a UFAI without those restrictions.
Among the six possibilities at the end of Armstrong and Sandberg’s analysis, the “dominant old species” scenario is what I mean—if there is one, it isn’t a UFAI.
An FAI will always have more rules to follow (“do not eat the ones with life on them”), and I just don’t see how it would have an advantage over a UFAI without those restrictions.
They mostly don’t have life on them, even in the Solar System; intergalactic travel involves more or less “straight shots” without stopovers (there’s nowhere to stop); and the slowdown is negligibly small.
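As a back-of-the-envelope illustration of why that rule is cheap (the fraction below is a made-up placeholder, not a figure from Armstrong and Sandberg): if only a small share of star systems carry life and must be left alone, the resources forgone scale with that share, and because the routes are straight shots with no stopovers, skipping those systems adds essentially no travel time.

# Toy cost estimate for the rule "do not eat the ones with life on them".
# The fraction below is an invented placeholder, not a measured value.
life_bearing_fraction = 0.001   # assumed share of systems that must be skipped

resources_forgone = life_bearing_fraction   # share of raw material given up
travel_time_penalty = 0.0                   # straight-shot routes need no stopovers

print(f"Resources forgone: {resources_forgone:.1%}")   # -> 0.1%
print(f"Extra travel time: {travel_time_penalty:.1%}")  # -> 0.0%
# Under these assumptions the handicap from the extra rule is of order 0.1%,
# i.e. negligibly small compared with what an unrestricted UFAI could claim.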
I disagree. See this post, and Armstrong and Sandberg’s analysis.
Brilliant links, thank you!
An FAI will always have more rules to follow (“do not eat the ones with life on them”), and I just don’t see how it would have an advantage over a UFAI without those restrictions.
A UFAI could well have more rules to follow, but those rules will not be as well chosen, and it’s not clear that their cost will be negligible.