I think the difference is that I, personally (and I think many other humans), have a nonlinearity in my utility function: I’m willing to pay a galaxy in the worlds we win to keep Earth in the worlds we lose. If there are other AIs in the multiverse with similarly non-linear interests in our Universe, they can also bargain for planets, but I suspect these will be quite rare, as they don’t already have something in our Universe they want to protect. So I think it will be hard to outbid humanity for Earth in particular.
There could be other trades that AIs with linear returns can still make, like producing objects that are both paperclips and corkscrews if that’s more efficient, but that doesn’t really affect our deal about Earth.
This nonlinearity also seems strange to have without also accepting quantum-immortality-type arguments. In particular, you would then only need to bargain for UFAIs to kill all humans painlessly and instantaneously, and then simulate those same humans yourself. (And if you want to save on compute, you can flip quantum coins for a bit.) Maybe it makes sense to have this nonlinearity but not accept this; I’d be curious to see what that position looks like.