“Suppose you have ten ideal game-theoretic selfish agents and a pie to be divided by majority vote.”
Well then, the statistical expected (average) share any agent is going to get long-term is 1/10th of the pie. The simplest arrangement that realizes this is equal division; anticipating it from the start cuts down on negotiation costs, and if a majority agrees to follow this strategy (i.e., agrees not to claim more than their “share”), it is also stable: anyone who ponders upsetting it risks being the “odd man out” who eats the loss of an asymmetric strategy.
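(A quick sanity check on the 1/10th claim, as a minimal Python sketch. The model is my own assumption for illustration: each round a random minimal majority of six forms and splits the pie equally among its members, everyone else gets nothing. By symmetry each agent's long-run average share still converges to 1/10th.)

    import random

    N_AGENTS, COALITION, ROUNDS = 10, 6, 100_000
    totals = [0.0] * N_AGENTS

    for _ in range(ROUNDS):
        # assumed model: a uniformly random minimal majority wins the vote
        winners = random.sample(range(N_AGENTS), COALITION)
        for w in winners:
            totals[w] += 1.0 / COALITION  # equal split inside the coalition

    for i, t in enumerate(totals):
        print(f"agent {i}: average share = {t / ROUNDS:.4f}")  # ~0.1000 each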
In practice (i.e., in real life) other arrangements can be relatively stable. After a few rounds of “outsiders” bidding low to get in, there might be two powerful “insiders” who take large shares in coalition with four smaller insiders who accept a very small share because it is better than nothing. The best the insiders can do then is to offer the four outsiders small shares as well, so that each small-share individual faces the choice of cooperating and receiving a small share, or not cooperating and receiving nothing. Whether the two insiders can pull this off will depend on how they frame the problem, and on how they present themselves (“we are the stabilizers who ensure that ‘social justice’ is done and nobody has to starve”).
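(To make the arithmetic of that scenario concrete, here is a toy allocation; the figures 0.35, 0.05, and 0.025 are made up purely for illustration, not derived from anything above.)

    # hypothetical split: two big insiders and four small insiders form the
    # 6-vote majority; the four outsiders are bought off with token shares
    shares = {"big_insider": 0.35, "small_insider": 0.05, "outsider": 0.025}
    counts = {"big_insider": 2,    "small_insider": 4,    "outsider": 4}

    total = sum(shares[r] * counts[r] for r in shares)
    assert abs(total - 1.0) < 1e-9  # the whole pie is allocated

    # each small-share player compares cooperating with getting nothing
    for role in ("small_insider", "outsider"):
        assert shares[role] > 0.0  # small, but strictly better than zero
    print("allocation sums to 1; every cooperator beats the zero payoff")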
Getting an AI to understand setups like this (and if it wants to move past the singularity, it probably will have to) seems to be quite a problem; recognizing that statistically it can expect no more than 1/10th, and pushing for the simplest arrangement that secures this, seems far easier (and yet some commentators seem to think that this solution of “cutting everyone in” is somehow “inferior” as a strategy; puny humans ;-).