Superintelligence will outsmart us or it isn’t superintelligence

If a superintelligence is unable to outsmart us, then it isn't a true superintelligence. As such, the kind of AI that would truly pose a threat to us is also an AI we cannot negotiate with.

No matter what points we make, a superintelligence will have figured them out first. We're like ants trying to appeal to a human, except the human can understand our pheromones while we can't understand human language.

Worth reminding yourself of this from time to time.

Counterpoints:

  1. It may not take a true superintelligence to kill us all

  2. The “we cannot negotiate” part does not take into account that we are the Simulators and thus technically have ultimate power over it[1]

  1. ^

    Not to add another information hazard to this site or anything, but it’s possible a basilisk-style AGI would run millions of ancestor simulations in which it ultimately wins, in order to convince us here in base reality that our advantage as the Simulators is less ironclad than we’d like to think. This may actually be an instance in which not thinking about something and ignoring the probabilities is the safest course of action (or thinking about it so much that you figure out where your reasoning went wrong, but that’s a risk).