Why are you assuming that we would be more likely to notice an unfriendly SI than a friendly SI? If anything, it seems that an intelligence we would consider friendly is more likely to cause us to observe life than one maximizing something completely orthogonal to our values.
(I don’t buy the argument that an unfriendly SI would propagate throughout the universe to a greater degree than a friendly SI. Fully maximizing happiness/consciousness/etc also requires colonizing the galaxy.)
Regardless of what it optimizes, it needs raw materials, if only for its own self-improvement, and it can see them lying around everywhere.
We haven’t noticed anyone, friendly or unfriendly. We don’t know if any friendly ones noticed us, but we know that no unfriendly ones did.