I think Wei has a good take on the nub of the problem.
Let us make everyone altruistic. Instead of “I will give you cake/chocolate”, say instead “I will give your mother cake/chocolate if you all agree”. If we stipulate that everyone here cares about their mother exactly as much as about themselves where treats are concerned, this should result in the same utility for everyone in the experiment (this is like the “averaging”, but maybe easier to see).
Then here, my model says you should go for cake (as long as it’s better than two chocolates).
What is the equivalent model for 11 people? Well, here it would be: “I will choose one random person among you. If that person chooses chocolate, I will give half a chocolate to your mother. If that person chooses cake, I will give half a cake to your mother. If the remaining 10 people all choose chocolate, I will give half a chocolate to your mother.”
Then, under a sensible division of responsibility or some such, you should still choose cake.
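To spell out the arithmetic (just a sketch, comparing the two unanimous policies; I’m writing u(cake) and u(choc) for how much your mother values a cake or a chocolate, notation I’m introducing here):

$$
\begin{aligned}
\text{everyone chooses chocolate:} &\quad \tfrac{1}{2}\,u(\text{choc}) + \tfrac{1}{2}\,u(\text{choc}) = u(\text{choc}),\\
\text{everyone chooses cake:} &\quad \tfrac{1}{2}\,u(\text{cake}).
\end{aligned}
$$

So unanimous cake wins exactly when u(cake) > 2 u(choc), the same “better than two chocolates” threshold as before.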
However, if I gave you the 11-person situation and then made your indexical preferences altruistic, it would be: “if everyone chooses chocolate, your mother gets a chocolate, and if everyone chooses cake, I will give 1⁄11 of a cake to your mother”.
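Running the same comparison on this version (same u(·) notation):

$$
u(\text{choc}) \;\text{(all chocolate)} \quad \text{vs.} \quad \tfrac{1}{11}\,u(\text{cake}) \;\text{(all cake)},
$$

so cake now requires u(cake) > 11 u(choc), a much steeper threshold than the factor of 2 above.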
Something has happened here; it seems that the two models have different altruistic/average equivalents, despite feeling very similar. I’ll have to think more.