Why do people have such low credences for “The effect of First contact is mostly harmful (e.g., selfish ETI, hazards)”? Most alien minds probably don’t care about us? But perhaps caring about variety is evolutionarily convergent? If not, why wouldn’t our “first contact” be extremely negative (given their tech advantage)?
My personal reasons:
I assumed the question was about the first few decades after “first contact”.
A large chunk of my probability mass is on first contact being unintentional, and something neither side can do much about. Or perhaps one “side” is unaware of it, such as if we receive a message directed at no one in particular, or detect the remnants of some extreme cosmic event that seems distinctly unnatural.
It feels like we’re near certain to have created an AGI by then. I am unsure enough about the long-term timescales of AGI improvement, and their limits, that I can assign some credence to the AGI we make possessing relatively advanced technology. If so, it may be in a good bargaining position. If we make many AIs instead, each may be less powerful individually, but they should still be quite potent in the face of a superior adversary.