My personal reasons:
I assumed the question was about the first few decades after “first contact”.
A large chunk of my probability mass is on first contact being unintentional, and something neither side can do much about. Perhaps one “side” isn’t even aware of it, as when we receive a message directed at no one in particular, or detect the remnants of some extreme cosmic event that looks mighty unnatural.
It feels near certain that we will have created an AGI by then. I am uncertain enough about the long-term timescales of AGI improvement, and its limits, that I assign some credence to the AGI we make possessing relatively advanced technology, and so being in a good bargaining position. If we make plenty of AIs, each may be less powerful individually, but they should still be quite potent in the face of a superior adversary.