Can you give some examples of where human cooperation is mainly being stopped by difficulty with bargaining?
Two kids fighting over a toy; a married couple arguing about who should do the dishes; war.
But now I think I can answer my own question. War only happens if two agents don’t have common knowledge about who would win (otherwise they’d agree to skip the costs of war). So if AIs are better than humans at establishing that kind of common knowledge, that makes bargaining failure less likely.
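To spell out the parenthetical with a minimal sketch (the probability and cost numbers below are purely illustrative assumptions, not anything specific): if both sides agree that A wins with probability p and that fighting costs each side c (as a fraction of the stakes), then any split giving A between p − c and p + c leaves both sides at least as well off as fighting, so some peaceful bargain exists.

```python
# Minimal sketch of the bargaining-range argument (illustrative numbers only).
# Stakes are normalized to 1. A wins a war with probability p; war costs each side c.

p = 0.9   # commonly known probability that A wins
c = 0.1   # each side's cost of fighting, as a fraction of the stakes

# Expected payoffs from fighting:
a_war = p - c          # A expects a p share of the stakes, minus its war cost
b_war = (1 - p) - c    # B expects a 1 - p share, minus its war cost

# Any peaceful split giving A a share s with p - c <= s <= p + c
# makes both sides weakly better off than going to war.
for s in [p - c, p, p + c]:
    assert s >= a_war and (1 - s) >= b_war
    print(f"split s={s:.2f}: A gets {s:.2f} (war: {a_war:.2f}), B gets {1 - s:.2f} (war: {b_war:.2f})")
```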
War only happens if two agents don’t have common knowledge about who would win (otherwise they’d agree to skip the costs of war).
But that assumes a strong ability to enforce agreements (which humans typically don’t have). For example, suppose it’s common knowledge that if countries A and B went to war, A would conquer B with probability .9 and it would cost each side $1 trillion. If they could enforce agreements, they could agree to roll a 10-sided die in place of the war and save $1 trillion each. But if they couldn’t, A would go to war with B anyway if it lost the roll, so B would now face a .99 probability of being taken over. Alternatively, B might agree to be taken over by A with certainty but receive some compensation to cover the .1 chance that it wouldn’t have lost the war. But after taking over B, A could just expropriate all of B’s property, including the compensation it had paid.
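Concretely, with the numbers above (a rough toy model I’m using for illustration, tracking B’s chance of being conquered and each side’s expected war cost separately):

```python
# Toy model of the die-roll agreement from the example above (illustrative framing).
p_win = 0.9          # commonly known probability that A wins an actual war
war_cost = 1.0       # cost of war to each side, in trillions of dollars

# Case 1: fight the war.
b_conquered_war = p_win                                  # 0.9
expected_cost_war = war_cost                             # each side pays 1.0

# Case 2: enforceable agreement - roll a 10-sided die instead of fighting.
# A "wins" the roll with probability .9 and the outcome is binding.
b_conquered_enforced = 0.9                               # same odds as war...
expected_cost_enforced = 0.0                             # ...but nobody pays the war cost

# Case 3: unenforceable agreement - if A loses the roll, it just fights anyway.
p_a_loses_roll = 0.1
b_conquered_unenforced = 0.9 + p_a_loses_roll * p_win    # 0.9 + 0.09 = 0.99
expected_cost_unenforced = p_a_loses_roll * war_cost     # war still happens 10% of the time

# B's chance of being conquered: 0.9 (war), 0.9 (enforced deal), ~0.99 (unenforced deal)
print(b_conquered_war, b_conquered_enforced, b_conquered_unenforced)
```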
War only happens if two agents don’t have common knowledge about who would win (otherwise they’d agree to skip the costs of war).
They might also have poorly aligned incentives, as in a war between two countries that allows both governments to gain power and prestige, at the cost of destruction borne by the ordinary people of both countries. But this sort of principal-agent problem also seems like something AIs should be better at dealing with.
Not only about who would win, but also about the costs the war would have. I think the difficulty in establishing common knowledge about this is partly due to people trying to deceive each other. It’s not clear that the ability to see through deception improves faster than the ability to deceive as intelligence increases.