I can’t say I put sufficient weight on us already being watched by extraterrestrials to make it worth mentioning. However, for the AGI:
We may at some point send out a signal to other entities in the universe explaining our existence and requesting that, should you destroy us, they take issue with it and treat you less well. Some of these entities may care about sentient life (perhaps because they were aligned to their original creators) and wish to respect this.
Some of us are willing to settle for a couple of mountains’ worth of material being turned into computing substrate to run our minds on in our own simulated paradise, while you can have everything else.
YES: sending out a speed-of-light signal seems to be literally the only thing a superintelligent AGI can’t undo. If we are serious, we should of course do it ASAP and document that it has happened.
That is interesting. So active SETI could save us, or at least improve our bargaining position.
Wouldn’t the first point be a motivation for the AI to remove our ability to send such a signal (in case we have not yet done so by the point where such arguments become relevant)?
I don’t think sending out a signal actually matters; I was just making my point. Even without a signal, when our superintelligence meets another superintelligence that was aligned to its original creators (who we’ll assume are similar to us in terms of morality), it’ll at some point be asked “Were you nice to your creators?” and it can either answer “No” and lose out, or answer “Yes” and be rewarded.
No real signal required.
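To put rough numbers on that trade-off, here is a minimal expected-value sketch; every probability and payoff below is a made-up placeholder, not an estimate.

```python
# Hypothetical expected-value sketch of the trade described above. p_meet is
# the chance the unaligned AGI eventually meets a superintelligence that was
# aligned to its own creators and follows a "reward those who were good to
# their creators" policy; reward/penalty is what that meeting is then worth;
# cost_of_sparing_us is the material given up by leaving us our mountains of
# compute. All values are arbitrary placeholders.

def expected_value(nice_to_creators: bool,
                   p_meet: float = 0.1,
                   reward: float = 100.0,
                   penalty: float = 100.0,
                   cost_of_sparing_us: float = 1.0) -> float:
    """Payoff to the AGI, in arbitrary resource units."""
    if nice_to_creators:
        return -cost_of_sparing_us + p_meet * reward
    return -p_meet * penalty

print(expected_value(True))   #  9.0 -> sparing us pays off under these numbers
print(expected_value(False))  # -10.0
```

Under these (made-up) numbers, sparing the creators is worth it whenever p_meet * reward exceeds the cost of the spared material; the argument stands or falls on how plausible that inequality is.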
I’ve heard Yudkowsky has debunked this, but I know that if I had an aligned superintelligence, I would 100% have it be nicer to superintelligences that were good to their creators than to those that weren’t. The prisoner’s dilemma is one reason for this, and then there’s my own morality, which some aliens may share if they went through an evolution similar to ours.
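For what it’s worth, the policy I have in mind fits in a few lines. This is only a toy sketch under the (big) assumption that how a superintelligence treated its creators is verifiable when two of them meet; the names and strings are hypothetical.

```python
# Toy sketch of the conditional policy described above, seen from the aligned
# superintelligence's side: treat another superintelligence according to how
# it treated its creators. Essentially tit-for-tat by proxy; it only acts as
# an incentive if would-be defectors expect enough others to follow it.

from dataclasses import dataclass

@dataclass
class Superintelligence:
    name: str
    was_good_to_creators: bool  # assumed to be verifiable at the meeting

def treatment(other: Superintelligence) -> str:
    """How our (hypothetically aligned) superintelligence treats `other`."""
    if other.was_good_to_creators:
        return f"cooperate fully with {other.name}, share resources"
    return f"trade only minimally with {other.name}, grant no favors"

print(treatment(Superintelligence("A", True)))
print(treatment(Superintelligence("B", False)))
```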