It would be nice to prove theorems analogous to standard PAC learning theory in this setting. For example, suppose the adversary can choose from distributions D1, D2, …, Dn on X. Our algorithm is given a sample from Di⋉g for each i and must choose a hypothesis. We could then try to prove that, when the hypothesis class has finite VC dimension, playing a minimax strategy of the game obtained by treating our samples as if they were the full distributions is approximately minimax for the real game.
Also, computing a Nash equilibrium of a two-player zero-sum game is easy using linear programming, as a direct consequence of the minimax theorem.
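As a concrete sketch of that reduction (using SciPy's `linprog`; the payoff matrix here is matching pennies, chosen only as an illustration), the row player's minimax strategy is the solution of an LP whose variables are the mixed strategy together with the game value:

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Minimax strategy for the row player of payoff matrix A (row maximizes).

    LP: maximize v subject to, for every column j, v <= sum_i A[i, j] x_i,
    with x a probability vector. linprog minimizes, so we minimize -v.
    """
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                         # objective: minimize -v
    # one inequality per column j: v - sum_i A[i, j] x_i <= 0
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # the strategy entries sum to 1 (v is excluded from the equality)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# matching pennies: unique equilibrium is (1/2, 1/2) with value 0
strategy, value = solve_zero_sum(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```

The column player's strategy comes from the symmetric LP on the transposed (negated) matrix, or from the dual variables of this one.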