In addition to gwern’s reply, it’s also highly relevant for replying to arguments of the form “game theory shows co-operation to be the most beneficial course of action, therefore we should expect AIs to want to co-operate with humans.” The obvious flaw in that argument is that an agent might have more beneficial courses of action available, such as exploitation, in which case we would expect AIs to ruthlessly exploit us if they were in a position to do so. It seems reasonable to presume that once you hold power over someone, exploiting them easily becomes more beneficial than co-operating with them.
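To make that intuition concrete, here is a minimal toy model in Python. All the payoff numbers are illustrative assumptions I picked for this sketch, not values from any established game-theoretic result: the strong player’s gain from exploiting scales up with the power asymmetry, while the expected cost of the weaker player’s retaliation scales down with it.

```python
# Toy model: when does exploitation dominate co-operation?
# All payoff numbers are illustrative assumptions, not empirical values.

def payoffs(power):
    """Strong player's expected payoff for each strategy.

    power in [0, 1]: 0 = equal footing, 1 = total power asymmetry.
    """
    cooperate = 3.0                    # steady payoff from mutual co-operation
    exploit_gain = 5.0 * power         # what exploitation extracts; grows with power
    retaliation = 4.0 * (1.0 - power)  # expected cost of retaliation; shrinks with power
    return {"cooperate": cooperate, "exploit": exploit_gain - retaliation}

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    pay = payoffs(p)
    best = max(pay, key=pay.get)
    print(f"power={p:.2f}  cooperate={pay['cooperate']:+.2f}  "
          f"exploit={pay['exploit']:+.2f}  ->  {best}")
```

With these particular numbers the crossover happens at power ≈ 0.78; the exact threshold is an artifact of the constants I chose. The qualitative point is only that a crossover exists whenever exploitation gains grow with power and retaliation costs shrink with it, which is all the argument above needs.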
“Are humans more likely to exploit other humans when they have more power over them?” is a testable prediction of this hypothesis: if exploitation really is more beneficial than co-operation once you are powerful enough, then we would expect our brains to have evolved to take advantage of that.