There are quite a few assumptions used to pin down the solutions, and they seem to unnecessarily restrict the solution space of bargaining strategies. For example:
“A player which contributes absolutely nothing to the project and just sits around, regardless of circumstances, should get 0 dollars.”
We might want solutions that benefit players who cannot contribute. For example, in an AGI world, a large number of organic humans may be unable to contribute because the overhead of trading with them swamps any gains from comparative advantage. We still want to give these people a slice of the pie. We want to value human life, not just production.
Maybe you could reconceive the project as including a “has more happy humans” term. This makes all participants contributors.
Related is the implicit assumption that the players’ inputs are what should determine the “chaa” result. I’d rather divide up the pie on consequentialist terms: which division maximizes the utility of the worst-off person, or of the median person, or the mean utility. A Marxist would want to distribute the gains according to the players’ “needs.” If our fellow humans come up with notions this different, an alien or AI can scarcely be expected to be more similar. Unfortunately, the inputs to the problem are missing terms for “need” and long-term population utility.
The assumption that if the total pile is a times as big, everyone should get a times as much is also unwarranted. The utility arising from 500,000,000 pieces of candy is less than 100,000,000 times the utility of 5 pieces. We get more mean and median utility when the extra gains go disproportionately to those who would have been allotted less.
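A quick numerical sketch of that claim, using an arbitrary concave utility curve of my own choosing (log(1 + x)); the specific numbers and shares are illustrative, not taken from the post:

```python
import math

# Illustrative concave utility of candy; the exact curve is an assumption,
# the only property that matters is diminishing returns.
def u(x):
    return math.log1p(x)

# 500,000,000 pieces are worth far less than 100,000,000 times 5 pieces.
print(u(500_000_000))        # ~20.0
print(100_000_000 * u(5))    # ~1.8e8

# Consequence for allocation: if 100 extra pieces appear on top of hypothetical
# shares of (5, 500), giving them to the player with less raises mean utility
# more than scaling both shares by the same factor does.
def mean_utility(alloc):
    return sum(u(x) for x in alloc) / len(alloc)

proportional = (5 * 605 / 505, 500 * 605 / 505)   # everyone gets "a times as much"
to_the_poorer = (5 + 100, 500)                    # the whole gain goes to the smaller share

print(mean_utility(proportional))   # ~4.2
print(mean_utility(to_the_poorer))  # ~5.4
```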
The CoCo solution has its share of assumptions. For example, payoff dominance: if player A gets more money than player B in all cells, then player A will leave the game with more money than player B.
I don’t see why this is how we would want to design an allocation method. We may need it to create an incentive structure for certain kinds of behavior, but for arbitrary situations I don’t think it’s a requirement.
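To make the payoff-dominance property concrete, here is a minimal sketch of the CoCo (cooperative/competitive) decomposition on a hypothetical 2×2 game; the payoffs are my own illustration, and the zero-sum part is solved only over pure strategies for brevity (the full definition uses the mixed-strategy minimax value):

```python
import numpy as np

# Hypothetical 2x2 bimatrix game (payoffs are illustrative, not from the post).
# A[i, j] is the row player's payoff, B[i, j] the column player's payoff.
# The row player earns more than the column player in every cell.
A = np.array([[6.0, 2.0],
              [5.0, 3.0]])
B = np.array([[4.0, 1.0],
              [2.0, 0.0]])

# CoCo decomposition: split the game into an identical-interest ("cooperative")
# component and a zero-sum ("competitive") component.
coop = (A + B) / 2.0
comp = (A - B) / 2.0

# Cooperative term: both players steer to the cell with the largest joint payoff.
team_value = coop.max()

# Competitive term: value of the zero-sum game. This sketch only checks pure
# strategies; this particular game has a pure saddle point, so it matches the
# mixed-strategy value.
row_value = comp.min(axis=1).max()       # row player's maximin in the zero-sum part
col_value = (-comp.T).min(axis=1).max()  # column player's maximin (= -row_value here)

coco_row = team_value + row_value
coco_col = team_value + col_value

print(coco_row, coco_col)  # 6.5 3.5 -- payoff dominance: the row player ends up with more
```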
This isn’t a philosophical post about how you would reshape the world if you had godlike powers to dictate terms to everyone; it’s a mathematical post about how agents with conflicting goals can reach a compromise.
You’re trying to bake your personal values (like happy humans) into the rules. If all the players in the game already share your values, you don’t need to do that, because it will already be reflected in their utility functions. If all players in the game don’t share your values (e.g. aliens), then why would they agree to divide resources according to rules that explicitly favor your values over theirs?
You’re trying to bake your personal values (like happy humans) into the rules.
My point is that this has already happened. The underlying assumptions bake in human values. The discussion so far did not convince me that an alien would share these values. I list instances where a human might object to these values. If a human may object to “a player which contributes absolutely nothing … gets nothing,” an alien may object too; if a human may object to “the only inputs are the set of players and a function from player subsets to utility,” an alien may object too; and so forth. These are assumptions baked into the rules of how to divide the resources. So, I am not convinced that these rules allow all agents with conflicting goals to reach a compromise because I am not convinced all agents will accept these rules.[1]
I brought up the “happy humans term” as a way to point out that maybe aliens wouldn’t object to the rule of “contribute nothing … get nothing” because they could always define the value functions so that the set of participants who contribute nothing is empty.
This sets up a meta-bargaining situation where we have to agree on which rules to accept to do bargaining before we can start bargaining. This situation seems to be a basic “Bargaining Game.” I think we might derive the utilities of each rule set from the utilities the participants receive from a bargain made under those rules + a term for how much they like using that rule set[2]. Unfortunately, except for “Choose options on the Pareto frontier whose utilities exceed the BATNA,” this game seems underdetermined, so we’ll have trouble reaching a consensus.
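As a minimal sketch of the one step that does seem determined, here is what “choose options on the Pareto frontier whose utilities exceed the BATNA” looks like over some hypothetical rule sets (the names and utility numbers are made up for illustration):

```python
from typing import Dict, List, Tuple

Utilities = Tuple[float, ...]  # one entry per player

def admissible(options: Dict[str, Utilities], batna: Utilities) -> List[str]:
    # Keep options that leave no player worse off than walking away...
    beats_batna = {name: u for name, u in options.items()
                   if all(ui >= bi for ui, bi in zip(u, batna))}
    # ...and that are not Pareto-dominated by another remaining option.
    def dominated(u: Utilities) -> bool:
        return any(v != u and all(vi >= ui for vi, ui in zip(v, u))
                   for v in beats_batna.values())
    return [name for name, u in beats_batna.items() if not dominated(u)]

# Hypothetical rule sets scored as (human utility, alien utility).
rule_sets = {
    "might makes right": (1.0, 8.0),
    "equal split":       (5.0, 5.0),
    "CoCo-style split":  (4.0, 6.0),
    "do nothing":        (2.0, 2.0),
}
print(admissible(rule_sets, batna=(2.0, 2.0)))
# -> ['equal split', 'CoCo-style split']; the meta-game is still underdetermined between them.
```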
To understand why I think there should be a term for how much they like using the rule set, imagine aliens who value self-determination and cooperative decision-making for all sentient beings and who can wipe us out militarily. Imagine we want to split the resources in an asteroid that both of us landed on. Consider the rule set of “might makes right.” Under this set, they can unilaterally dictate how the asteroid is divided, so they get maximum utility from the asteroid’s resources. However, they recognize that this is the opposite of self-determination and cooperative decision-making, so getting all of the resources this way is worth less to them than getting all of the resources under another set of rules.
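One way to write down the decomposition I have in mind (my own formalization, not anything from the original post):

```latex
U_{\text{alien}}(R) \;=\; u_{\text{alien}}\big(\text{allocation reached under } R\big) \;+\; v_{\text{alien}}(R)
```

Here the second term penalizes rule sets like “might makes right” even when the allocation term is maximal.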
While an alien (or a human) could in principle object to literally any rule (No Universally Compelling Arguments), I think “players who contribute nothing get nothing” is very reasonable on purely pragmatic grounds, because those players have nothing to bargain with. They are effectively non-players.
If you give free resources to “players” who contribute nothing, then what stops me from demanding additional shares for my pet rock, my dead grandparents, and my imaginary friends? The chaa division of resources shouldn’t change based on whether I claim to be 1 person or a conglomerate of 37 trillion cells that each want a share of the pie, if the real-world actions being taken are the same under both abstractions.
Also, I think you may be confusing desiderata with assumptions. “Players who contribute nothing get nothing” was taken as a goal that the rules tried to achieve, and so it makes sense (in principle) to argue about whether that’s a good goal. Stuff like “players have utility functions” is not a goal; it’s more like a description of what problem is being solved. You could argue about how well that abstraction represents various real scenarios, but it’s not really a values statement.