I asked this in person, but I don’t think you’ve addressed it in the write-up:
The use of utility functions to try to capture the power dynamic seems to run into problems. Wei_Dai has an example with non-linear utility functions, but we can make it even more obvious.
In your original example, say the slug just doesn't really care about anything other than its leaf. It would marginally prefer the entire galaxy to be turned into a garden of leaves, but that preference is minuscule compared to its normal preferences about its own domain. Then we are in a situation where the two agents, the galactic empire and the slug, each care a lot about their own domain and very little about the other's. If it happens that the slug is even more self-centred than the empire, then with your solution the slug's preferences win out, if only to the limited degree permitted by the constraint that the empire can't lose more than it can gain.
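To make the worry concrete, here is a minimal numeric sketch. The normalization scheme and the sum-of-utilities rule are my own assumptions for illustration, not necessarily what your write-up proposes: each agent's utility is rescaled so its range over the outcomes is 1, and the bargain picks the outcome maximizing the sum of normalized utilities.

```python
# Hypothetical outcomes over two issues: the slug's leaf and the galaxy.
outcomes = [
    ("slug keeps leaf", "empire runs galaxy"),
    ("empire takes leaf", "empire runs galaxy"),
    ("slug keeps leaf", "galaxy of leaves"),
    ("empire takes leaf", "galaxy of leaves"),
]

# Raw preferences: the slug cares almost entirely about its leaf;
# the empire cares mostly about the galaxy, a little about the leaf.
def slug_raw(leaf, galaxy):
    return (0.999 if leaf == "slug keeps leaf" else 0.0) \
         + (0.001 if galaxy == "galaxy of leaves" else 0.0)

def empire_raw(leaf, galaxy):
    return (0.9 if galaxy == "empire runs galaxy" else 0.0) \
         + (0.1 if leaf == "empire takes leaf" else 0.0)

# Assumed normalization: rescale each utility so its range over the
# outcome set is exactly 1.
def normalize(u, outcomes):
    vals = [u(*o) for o in outcomes]
    lo, hi = min(vals), max(vals)
    return lambda *o: (u(*o) - lo) / (hi - lo)

slug = normalize(slug_raw, outcomes)
empire = normalize(empire_raw, outcomes)

# Assumed bargaining rule: maximize the sum of normalized utilities.
best = max(outcomes, key=lambda o: slug(*o) + empire(*o))
print(best)
# -> ('slug keeps leaf', 'empire runs galaxy')
# The slug wins the contested leaf: being more self-centred gives its
# leaf preference nearly all of its normalized weight (0.999), versus
# the 0.1 weight the empire puts on taking the leaf.
```

Under these (assumed) rules, the more self-centred agent's preferences dominate on the contested issue regardless of the two agents' actual power.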
Even if you don't think you'd naturally find utility functions like this (although many people seem to have preferences of roughly this shape), if this were a standard bargaining system, it would become pragmatic to modify your own utility function to care very little about things you don't expect to be able to affect.
As a side note, I wonder if it would be better to give what you're trying to do here a name other than 'bargaining'. Establishing a defection point and then allowing 'bargains' which are worse for one side than the defection point runs against my intuitive understanding of the word, and I believe against its standard use in the literature (though I'm not certain).
If the slug gains little, it can lose little. That's the only constraint; we don't know whose preferences will "win".