It’s very hard (perhaps impossible for humans) to communicate without changing the payoff matrix. As soon as the framing shifts to make the other player seem slightly more trustworthy or empathetic, the actual human evaluation includes other factors (kindness, self-image, etc.). In other words, most people’s utility function _does_ include the happiness of others. Those terms can vary widely, and even vary in sign, depending on framing and on how the other player is evaluated.
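For concreteness, here is a minimal sketch (with made-up payoffs and a hypothetical "empathy weight" w, neither taken from any real experiment) of how folding the other player's payoff into your own utility rewrites a stated Prisoner's Dilemma matrix:

```python
# Stated payoffs: (my_move, their_move) -> (my_payoff, their_payoff).
# Numbers are illustrative only.
STATED = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def experienced(stated, w):
    """Utility each player actually evaluates: own payoff + w * other's payoff.
    w > 0 models empathy, w < 0 models spite; w shifts with framing."""
    return {
        moves: (mine + w * theirs, theirs + w * mine)
        for moves, (mine, theirs) in stated.items()
    }

# w = 0 reproduces the stated matrix; for these numbers, w > 2/3 makes
# cooperation strictly dominant, and a negative w makes defection look even better.
for w in (0.0, 0.8, -0.5):
    print(w, experienced(STATED, w))
```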
More importantly, the Nash equilibrium is largely irrelevant to non-zero-sum games; there is no reason to believe that any optimization process is seeking it. edit: I retract this paragraph. The Nash equilibrium is relevant to some non-zero-sum games, but there are truly ZERO one-shot, independent games that humans participate in. No trial or demonstration can avoid the fact that the utility players actually receive is not linear in the stated payoff matrix.
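To see the gap between the stated matrix and what players actually optimize, here is a rough sketch (same made-up payoffs and hypothetical empathy weight as above, purely illustrative) that enumerates pure-strategy Nash equilibria for the stated payoffs versus the experienced utilities:

```python
from itertools import product

def pure_nash(game, moves=("C", "D")):
    """Pure-strategy Nash equilibria of a 2-player game given as
    {(row_move, col_move): (row_payoff, col_payoff)}."""
    eqs = []
    for r, c in product(moves, repeat=2):
        row_ok = all(game[(r, c)][0] >= game[(r2, c)][0] for r2 in moves)
        col_ok = all(game[(r, c)][1] >= game[(r, c2)][1] for c2 in moves)
        if row_ok and col_ok:
            eqs.append((r, c))
    return eqs

# Stated Prisoner's Dilemma (illustrative numbers).
stated = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# Experienced utilities with a hypothetical empathy weight of 0.8:
# each entry becomes own payoff + 0.8 * other's payoff.
w = 0.8
experienced = {m: (a + w * b, b + w * a) for m, (a, b) in stated.items()}

print(pure_nash(stated))       # [('D', 'D')] -- equilibrium of the stated matrix
print(pure_nash(experienced))  # [('C', 'C')] -- equilibrium of what players actually value
```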