I agree with this, although it might not work for some theoretically possible games that humans would not actually play.
Life in the real world, however, is not a perfect-information zero-sum game, or even an approximation of one. So there is no reason to suppose that the techniques used will generalize to a fooming AI.
As far as I can see, you can use the same techniques to learn to play any perfect-information zero-sum game.
Is there any reason why the same techniques couldn't be applied to imperfect-information, non-zero-sum games?
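Concretely, the training loop in this family of methods is game-agnostic: it only touches the game through a small interface (current state, legal moves, apply a move, terminal result). Below is a minimal, hypothetical sketch of tabular self-play Q-learning on tic-tac-toe; `TicTacToe`, `train`, and all parameter values are illustrative assumptions, not code from AlphaGo or the papers linked below. Swapping in a different game class is the only change needed to train on a different game:

```python
# Illustrative sketch only: tabular self-play Q-learning against a generic
# game interface. All names and hyperparameters here are hypothetical.
import random
from collections import defaultdict

class TicTacToe:
    """Tiny perfect-information zero-sum game used as a stand-in."""
    def __init__(self):
        self.board = [' '] * 9
        self.player = 'X'

    def legal_moves(self):
        return [i for i, c in enumerate(self.board) if c == ' ']

    def play(self, move):
        self.board[move] = self.player
        self.player = 'O' if self.player == 'X' else 'X'

    def winner(self):
        lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
        for a, b, c in lines:
            if self.board[a] != ' ' and self.board[a] == self.board[b] == self.board[c]:
                return self.board[a]
        return 'draw' if ' ' not in self.board else None

    def state(self):
        # Encode the board plus the player to move as a hashable key.
        return ''.join(self.board) + self.player

def train(episodes=20000, alpha=0.3, gamma=0.9, epsilon=0.1):
    # Q maps (state, move) -> value from the perspective of the player to move.
    Q = defaultdict(float)
    for _ in range(episodes):
        game = TicTacToe()
        history = []  # (state, move) pairs, players alternating
        while game.winner() is None:
            s, moves = game.state(), game.legal_moves()
            if random.random() < epsilon:                     # explore
                m = random.choice(moves)
            else:                                             # exploit
                m = max(moves, key=lambda m: Q[(s, m)])
            history.append((s, m))
            game.play(m)
        result = game.winner()
        # Monte-Carlo backup of the terminal reward toward earlier moves.
        # Zero-sum payoff: +1 for the winner's moves, -1 for the loser's.
        for i, (s, m) in enumerate(reversed(history)):
            mover = s[-1]  # state() stores the player who moved from s
            r = 0.0 if result == 'draw' else (1.0 if result == mover else -1.0)
            Q[(s, m)] += alpha * (r * (gamma ** i) - Q[(s, m)])
    return Q

if __name__ == '__main__':
    Q = train()
    print('learned state-action values:', len(Q))
```

Imperfect information does complicate this picture: the state is no longer fully observable, so a naive value backup like the one above is no longer sound as written, which is part of what the question is probing.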
Here are some examples of recent work that uses these same tools to make other critical components of a more general AI:
https://coxlab.github.io/prednet/ (PredNet: unsupervised video prediction via deep predictive coding)
https://arxiv.org/abs/1707.06203 (Imagination-Augmented Agents for Deep Reinforcement Learning)
https://deepmind.com/blog/differentiable-neural-computers/ (Differentiable Neural Computers)