My favorite answer to this problem comes from “How to Cut a Cake: And Other Mathematical Conundrums.”
The book’s solution was to define “fair” as “no one has cause to complain.” That definition doesn’t apply directly here, since one party wants to divide the pie unevenly, but it works when everyone is aiming for equal shares.
The algorithm was:
Make a cut from the center to the edge.
Have one person hold the knife over that cut.
Slowly rotate the knife (or the pie) at, say, a few degrees per second.
At any time, any person (including the one holding the knife) can say “cut.” A cut is made there, and the speaker gets the thus-cut piece.
At the end, anyone who thinks they got too little (meaning, someone else got too much) could have said “cut” before that other person’s piece grew too big.
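The procedure above can be sketched as a short simulation. (This is the Dubins–Spanier moving-knife scheme; the valuation densities below are made-up examples, and each player calls “cut” once the swept sector is worth 1/n of the whole pie by that player’s own measure.)

```python
def moving_knife(densities, steps=50_000):
    """Moving-knife sketch: densities[i] maps an angle in [0, 1) to player i's
    value density, normalized so the whole pie is worth 1 to every player."""
    n = len(densities)
    players = list(range(n))
    swept = {i: 0.0 for i in players}   # value of the current sector, per player
    pieces = {}
    start = 0.0
    dx = 1.0 / steps
    for k in range(steps):
        x = (k + 0.5) * dx              # midpoint of this sliver of pie
        for i in players:
            swept[i] += densities[i](x) * dx
        if len(players) > 1:
            # the first remaining player whose sector reaches 1/n says "cut"
            shouter = next((i for i in players if swept[i] >= 1.0 / n), None)
            if shouter is not None:
                pieces[shouter] = (start, (k + 1) * dx)
                players.remove(shouter)
                start = (k + 1) * dx
                swept = {i: 0.0 for i in players}
    pieces[players[0]] = (start, 1.0)   # the last player takes the remainder
    return pieces

def value(f, a, b, steps=10_000):
    """Numerically integrate density f over the piece (a, b)."""
    w = (b - a) / steps
    return sum(f(a + (j + 0.5) * w) for j in range(steps)) * w
```

Because nobody who stayed silent thought any earlier piece was worth more than 1/n, every player ends up valuing their own piece at 1/n or better, which is the “no cause to complain” guarantee.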
That’s actually a really good idea. It’s like the ‘cut the deck and the other person gets to pick half’ idea, but this one generalizes to multiple people. Elegant.
‘cut the deck and the other person gets to pick half’
That’s the simplest form. AnthonyC adapts it to work for multiple people, provided that everyone agrees that the utility should be divided up evenly. I think it’s possible to adapt the principle further, so that it also applies to situations posed by others on this thread. (Insulin should be given preferentially to diabetics, and antidote should be distributed so as to maximize the number of lives saved.)
If no one knows whether they are one of the parties that will benefit from unfair distribution, then even selfish Bayesian agents will agree on a distribution. This might be accomplished if a group can decide in advance what to do in certain circumstances.
For example, say a group of N people thinks that some of them might be poisoned, but no one is exhibiting symptoms yet. The group might decide to administer 1 unit of antidote to the first person to show visible symptoms. If they continue to treat each person who shows symptoms, in order, they may well exhaust their n units of antidote (with n < N). Before anyone shows symptoms, even in a worst-case scenario where they are all poisoned, self-interested parties will find it fairly easy to agree to an n/N chance of survival each. When they are down to their n-th and last unit of antidote, however, all parties except the one showing symptoms have a strong incentive to withhold it: if they are all poisoned and the last unit goes to someone else, their own chance of survival is 0%.
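A quick sanity check of the ex-ante agreement, as a sketch: in the worst case where everyone is poisoned and symptoms appear in a random order, the pre-committed policy of treating the first n symptomatic people gives each person roughly an n/N chance of survival. (The values N=10 and n=4 below are made up for illustration.)

```python
import random

def survival_rates(N, n, trials=20_000):
    """Worst case: everyone is poisoned. Symptoms appear in a uniformly random
    order, and the pre-committed policy treats the first n people to show them."""
    survived = [0] * N
    for _ in range(trials):
        order = random.sample(range(N), N)  # order in which symptoms appear
        for person in order[:n]:            # first n symptomatic get the antidote
            survived[person] += 1
    return [s / trials for s in survived]

rates = survival_rates(N=10, n=4)
# each person's estimated survival chance should be close to n/N = 0.4
```

Each person faces the same n/N lottery before symptoms appear, which is why agreeing in advance is easy and reneging only becomes tempting afterward.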
This assumes that all parties derive equal utility from the same resource, however. It’s much more difficult when one party’s benefit can only be judged qualitatively. For example, suppose Xannon and Yancy don’t really like pie all that much and aren’t very hungry, but Zaire hasn’t eaten anything in the last day or two. Or suppose we want to compare how much a pig values its own life against the pleasure a much more intelligent human gets out of eating bacon.
If you can determine a conversion factor, though, or agree on the relative benefits to each party, then it becomes pretty obvious which option leads to the greatest total utility: you just choose the option with the highest expected total. All of the difficulty is contained in assessing utility across different parties without making apples-to-oranges comparisons.
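Once a common scale exists, the decision itself is trivial, an argmax over totals. A sketch with made-up utility numbers for the pie example (the option names and figures are purely illustrative):

```python
# Hypothetical utilities on a shared scale, assuming Zaire's hunger has
# already been converted into the same units as the others' mild preferences.
options = {
    "split_evenly": {"Xannon": 1.0, "Yancy": 1.0, "Zaire": 1.0},
    "favor_zaire":  {"Xannon": 0.3, "Yancy": 0.3, "Zaire": 4.0},
}

# Choose the option with the highest total utility.
best = max(options, key=lambda o: sum(options[o].values()))
```

All the real work happened before this step, in producing the numbers; the choice itself is mechanical.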
Nice! Thanks a lot.