In so far as the Fermi paradox implies we’re in great danger, it also suggests that exciting newly-possible things we might try could be more dangerous than they look. Perhaps some strange feedback loop involving intelligence enhancement is part of the danger. (The usual intelligence-enhancement feedback loop people worry about around here involves AI, of course, but perhaps that’s not the only one that’s scary.)
Hostile intelligences would presumably still create Dyson spheres/colonise the galaxy/emit radio waves/do something to alert other civilisations to their presence. Whatever resolves the Fermi paradox has to be something like superweapons, not superintelligence.