We can’t negotiate with something smarter than us
Superintelligence will outsmart us or it isn’t superintelligence. As such, the kind of AI that would truly pose a threat to us is also an AI we cannot negotiate with.
No matter what arguments we make, superintelligence will have figured them out first. We’re like ants trying to appeal to a human: the human might be able to read our pheromones, but we can’t understand human language. Whether we get squashed or not is entirely up to the human and its own reasoning.
Worth reminding yourself of this from time to time, even if it’s obvious.
Counterpoints:
It may not take a true superintelligence to kill us all, meaning we could perhaps negotiate with a pre-AGI machine
The “we cannot negotiate” part does not take into account that we are the Simulators and thus technically have ultimate power over it