I suspect most readers will not find the KS solution more intuitively appealing.
The problem in your example is that you failed to identify a reasonable disagreement point. In the situation you described, (1,1) is the disagreement point, since every agent can unilaterally guarantee emself a payoff of 1; the KS solution is then also (1,1), because the disagreement point already lies on the Pareto frontier.
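To make the mechanics concrete, here is a minimal sketch of computing the KS (Kalai-Smorodinsky) point for two players: it is the maximal feasible point on the segment from the disagreement point to the "ideal point" of per-player best payoffs. The feasible set and disagreement point below are invented for illustration, and the grid search for the ideal point is deliberately crude.

```python
def ks_solution(d, feasible, steps=60):
    """Find the KS point: the maximal feasible point on the segment from
    the disagreement point d to the ideal point m, where m[i] is the best
    feasible payoff for player i (holding the other coordinate at d)."""
    # Ideal point via a coarse grid search (illustrative, not robust).
    def best_for(i):
        best = d[i]
        u = [d[0], d[1]]
        for k in range(2001):
            u[i] = d[i] + k * 0.01
            if feasible((u[0], u[1])):
                best = u[i]
        return best
    m = (best_for(0), best_for(1))
    # Bisect on t in [0, 1]: u(t) = d + t * (m - d).
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        t = (lo + hi) / 2
        u = tuple(d[i] + t * (m[i] - d[i]) for i in range(2))
        if feasible(u):
            lo = t
        else:
            hi = t
    return tuple(d[i] + lo * (m[i] - d[i]) for i in range(2))

# Example feasible set: u1 + 2*u2 <= 4 with u >= 0, disagreement point (0, 0).
feas = lambda u: u[0] >= 0 and u[1] >= 0 and u[0] + 2 * u[1] <= 4 + 1e-9
print(ks_solution((0.0, 0.0), feas))  # ~ (2.0, 1.0): equal relative gains 2/4 = 1/2
```

If the disagreement point were already on the frontier, the bisection would return it unchanged, which is exactly the (1,1) case above.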
In general it is not obvious what the disagreement point should be, but the maximin payoffs are one natural choice. Nash equilibrium is the obvious alternative, but it's not clear what to do when there are several.
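For the maximin choice, a toy sketch (with made-up payoffs): each player's disagreement payoff is what e can guarantee emself regardless of the opponent. This version only scans pure actions; the true maximin value may require mixed strategies.

```python
def pure_maximin(payoffs):
    """payoffs[a][b] = this player's payoff when it plays a and the opponent
    plays b; return the payoff it can guarantee with a pure action."""
    return max(min(row) for row in payoffs)

row_payoffs = [[3, 0],
               [2, 2]]  # row player's payoffs, indexed [own action][opponent action]
col_payoffs = [[1, 0],
               [0, 2]]  # column player's payoffs, same indexing convention

d = (pure_maximin(row_payoffs), pure_maximin(col_payoffs))
print(d)  # (2, 0)
```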
For applications such as voting and multi-user AI alignment that's less natural, since even if we know the utility functions, it's not clear which action spaces we should consider. In that case, one possible choice of disagreement point is the outcome of maximizing the utility of a randomly chosen participant. If the problem can be formulated as partitioning resources, then the uniform partition is another natural choice.
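The random-participant choice can be sketched as a "random dictator" point: pick each participant with equal probability, choose an outcome maximizing eir utility, and take the expected payoff profile as the disagreement point. The outcomes below are invented for illustration.

```python
def random_dictator_point(outcomes, n):
    """Disagreement point where a uniformly random participant's
    utility is maximized over the feasible outcomes."""
    # For each participant j, find an outcome maximizing u_j ...
    best = [max(outcomes, key=lambda o: o[j]) for j in range(n)]
    # ... then average the resulting payoff profiles over a uniform pick of j.
    return tuple(sum(b[i] for b in best) / n for i in range(n))

outcomes = [(4, 0), (3, 2), (2, 3), (0, 4)]  # feasible payoff profiles
print(random_dictator_point(outcomes, 2))    # (2.0, 2.0)
```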
Ahh, yeahh, that’s a good point.