How Often Does Taking Away Options Help?
In some game-theoretic setups, taking options away from a player improves their situation. I ran a Monte Carlo simulation to figure out how often that is the case: I generated random normal-form games with payoffs in [0, 1], removed a random option from the first player, and compared the Nash equilibria, which I found via vertex enumeration of the best response polytope (using nashpy), since the Lemke-Howson algorithm was giving me duplicate results.
Code here, largely written by Claude 3.5 Sonnet.
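Roughly, the per-game computation looks something like the sketch below. This is illustrative rather than the linked script itself; the function names, the fixed seed, and the 4×4 game size are just for the example.

```python
import numpy as np
import nashpy as nash

rng = np.random.default_rng(0)

def random_game(n_rows, n_cols):
    """A random normal-form game with i.i.d. uniform payoffs in [0, 1]."""
    A = rng.uniform(size=(n_rows, n_cols))  # row player's payoff matrix
    B = rng.uniform(size=(n_rows, n_cols))  # column player's payoff matrix
    return A, B

def drop_random_row(A, B):
    """Remove one random option (row) from the first player."""
    row = rng.integers(A.shape[0])
    return np.delete(A, row, axis=0), np.delete(B, row, axis=0)

def mean_equilibrium_payoffs(A, B):
    """Mean payoff per player, averaged over the Nash equilibria found by
    vertex enumeration of the best response polytopes."""
    game = nash.Game(A, B)
    payoffs = [game[sigma_r, sigma_c]
               for sigma_r, sigma_c in game.vertex_enumeration()]
    return np.mean(payoffs, axis=0)  # array: (row player, column player)

A, B = random_game(4, 4)
print(mean_equilibrium_payoffs(A, B))                    # full game
print(mean_equilibrium_payoffs(*drop_random_row(A, B)))  # one option removed
```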
Not clear to me how to interpret the chart.
I wrote a short reply to Dagon; maybe that helps.
Otherwise I might write up a full post explaining this with examples &c.
Code link gives a 404, so I can’t look and see, but I’m curious what the ratio is actually comparing. Is that the percentage of removals that were an improvement, the ratio of improved to degraded (ignoring irrelevant removals), the mean change (unlikely, since all are positive), or something else? Does >0.5 imply a benefit and <0.5 imply a harm (assuming so)?
It’s interesting that removing random options from A is never beneficial to A, but is also harmful to B unless B starts out with more actions than A. I presume these payouts weren’t normalized to zero-sum, so that’s down to the distribution of outcomes-to-actions, and who “has more control”.
Updated the link to the actual code. I computed the equilibria for the full game, computed each player’s payoff at each equilibrium, and took the mean for each player. I did the same for the game with one option removed. The number in the chart is the proportion of games where removing one option from player A improved that player’s mean equilibrium payoff.
If the number is >0.5, that means that removing one option from A improves that player’s payoff in the majority of sampled games. (The number of options is pre-removal.) I also found this interesting, but the charts are maybe a bit misleading, because often removing one option from A doesn’t change the equilibria. I’ll maybe generate some charts for this.
I’ll perhaps also write a clearer explanation of what is happening and repost as a top-level post.
And: yes, the games weren’t normalized to be zero-sum.
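To make the metric above concrete, here is a rough sketch of how the proportion could be computed, reusing random_game, drop_random_row, and mean_equilibrium_payoffs from the earlier sketch. Again this is illustrative rather than the actual script; n_games=1000 and the 4×4 default are arbitrary choices for the example.

```python
def removal_helps_proportion(n_games=1000, n_rows=4, n_cols=4):
    """Fraction of sampled games in which removing a random option from
    player A strictly improves a player's mean equilibrium payoff.
    Games where the equilibria (and hence payoffs) are unchanged count
    as 'not improved'."""
    improved = np.zeros(2)  # counts for (player A, player B)
    for _ in range(n_games):
        A, B = random_game(n_rows, n_cols)
        before = mean_equilibrium_payoffs(A, B)
        after = mean_equilibrium_payoffs(*drop_random_row(A, B))
        improved += after > before
    return improved / n_games  # >0.5: removal helps in a majority of games
```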