I also find it hard to believe that humans of any sort would be of special interest to a superintelligence. Do I really bear the burden of proof there?
It’s plausible, to me, that a superintelligence built by humans and intended by them to care about humans would in fact care about humans, even if it didn’t have the precise goals they intended it to have.
This seems overly complex. Now we’re assuming the AI goes wrong? These people want to be in a simulation, and they need a Schelling point with other humanities. Why wouldn’t they just give the AI clear instructions to simulate other Earths?