But this post isn’t about people’s feelings. It’s about rational agents attempting to maximize wealth.
No, the entire game is about people’s feelings, if you allow discussion before the offer and accept steps, and/or if it’s humans making the decisions (where you cannot assume that self-identity and the habitual expectation of future impact don’t dominate).
It’s only about rational agents attempting to maximize wealth if you make it purely anonymous and hide the other player’s actions (and ideally that the other player is even involved). Tell player 1 they can pick an amount to keep, but not who their counterpart is (or even whether it’s human; just say it’s an agent optimizing its wealth). Tell player 2 they can accept the amount offered or reject it, but don’t add the psychology of what “might have been” by telling them that their choice has any effect on player 1 (or even that player 1 exists), or that the amount offered is variable based on some other agent.
But that’s boring—perfectly rational agents maximizing their wealth and not considering any future interactions pretty much do what Nash says—offer the minimum, accept anything over 0. The game is interesting ONLY when those conditions don’t hold.
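For concreteness, here is a minimal sketch of that backward-induction result, assuming a discrete pie (I'm using 10 units; the pie size and function names are illustrative, not from the post) and a responder who cares only about its own payoff:

```python
# Minimal sketch of the backward-induction argument for the ultimatum game,
# assuming a discrete 10-unit pie and purely money-maximizing players.

PIE = 10

def responder_accepts(offer: int) -> bool:
    # A pure money-maximizer prefers any positive amount to the 0 from rejecting.
    return offer > 0

def best_proposal() -> int:
    # The proposer keeps the most it can while still being accepted:
    # scan offers from smallest to largest, take the first one accepted.
    for offer in range(PIE + 1):
        if responder_accepts(offer):
            return offer
    return PIE  # degenerate case: nothing is accepted, the offer is moot

offer = best_proposal()
print(f"offer {offer}, proposer keeps {PIE - offer}")  # offer 1, proposer keeps 9
```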
When you allow discussion, all this goes out the window. It’s just psychology: what do you think the other player will accept rather than “punish” you, even at a cost to themselves? As you note, if precommitment is asymmetric (player 2 can use it, player 1 can’t), that just reverses the roles, as the sketch below shows. If it’s symmetrical, then it’s back to pure psychology about where the line is set.
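To see the reversal concretely, here is the same toy model with one-sided precommitment (again a hypothetical 10-unit pie): if player 2 can credibly commit to a rejection threshold before the offer, player 1’s best response is to offer exactly that threshold, so the surplus flips to player 2:

```python
# Sketch of one-sided precommitment, same assumed 10-unit pie:
# the responder publicly commits to a rejection threshold before the offer.

PIE = 10

def proposer_best_offer(threshold: int) -> int:
    # Facing a credible threshold, the proposer offers exactly that much:
    # offering less means rejection and 0 for both; offering more is wasted.
    return threshold if threshold <= PIE else 0  # unsatisfiable threshold: offer is moot

for threshold in (1, 5, 9):
    offer = proposer_best_offer(threshold)
    print(f"committed threshold {threshold}: offer {offer}, proposer keeps {PIE - offer}")
```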
See also altruistic punishment (one reference: Fehr & Gächter, “Altruistic punishment in humans”, Nature 415, 137–140 (2002), https://www.nature.com/articles/415137a).