The Fermi paradox provides some evidence against long-lived civilizations of any kind, hostile or non-hostile. Entangling the Fermi paradox with questions about the character of future civilization (such as AI risk) doesn’t seem very helpful.
To put this point slightly differently, the Fermi paradox isn’t strong evidence for any of the following over the others: (a) Humanity will create Friendly AI; (b) humanity will create Unfriendly AI; (c) humanity will not be able to produce any sort of FOOMing AI, but will develop into a future civilization capable of colonizing the stars. This is because, for each of these scenarios, if the analogous event had happened in the past on an alien planet sufficiently close to us (e.g., in our galaxy), we would see the results: to the degree that the Fermi paradox provides evidence about (a), (b), and (c), it provides about the same amount of evidence against each. (It does provide evidence against each, since one possible explanation for the Fermi paradox is a Great Filter that’s still ahead of us.)
Even if that were true (which I don’t see why it would be: like FAIs, uFAIs will have goals they are trying to maximize, and their options will be limited to those not in conflict with those goals): Why on Earth would this difference take the form of “given millions of years, you can’t colonize the galaxy”? And moreover, why would it reliably have taken this form for every single civilization that has arisen in the past? We’d certainly expect an FAI built by humanity to go to the stars!