I disagree that my point has been fully discussed, and even if it had been, I think it would be burying the lede to start with a paragraph like this:
“Suppose that 1% of the world’s resources are controlled by unaligned AI, and 99% of the world’s resources are controlled by humans. We might hope that at least 99% of the universe’s resources end up being used for stuff-humans-like (in expectation).”
Without following it up with something like:
“Of course, the strategic considerations here are such that an immoral actor with 1% could choose to eliminate the 99% and thus have 100% of the future resources. Furthermore, if the unaligned AI team had so far hidden its existence, then this option would be asymmetrical since the 99% of humans wouldn’t know that they even had an opponent or that they were in imminent danger of being wiped out. Thus, we’d need to assume a very different offense-defense balance, or a failure of secrecy, to expect anything other than 100% of future resources going to the unaligned AI team. The remainder of this post explores the specific branch of the hypothetical future in which elimination of the opponent (in either direction) is not an option for some unspecified reason.”
I think the post does address and discuss this; see items 4, 8, and 11.
I’m sympathetic to disagreeing with Paul overall, but it’s not as though these considerations haven’t been discussed.