program equilibria in open-source game theory: once a model is strong enough to make exact mathematical inferences about the implications of how the approximator's learned behavior actually landed after training, the resulting game-theoretic reflection can get incredibly weird. this is where much of the decision-theory stuff comes up, and the reason we haven't run into it yet is that current models are big enough to be really hard to prove anything through. (a toy sketch of the program-equilibrium setup follows the link list below.) Related work, new and old:
https://arxiv.org/pdf/2208.07006.pdf—Cooperative and uncooperative institution designs: Surprises and problems in open-source game theory—near the top of my to-read list; by Andrew Critch, who has some other posts on the topic, especially the good ol' "Open Source Game Theory is weird", and several more recent ones I haven't properly read yet
https://arxiv.org/pdf/2211.05057.pdf—A Note on the Compatibility of Different Robust Program Equilibria of the Prisoner’s Dilemma
https://arxiv.org/pdf/1401.5577.pdf—Robust Cooperation in the Prisoner’s Dilemma: Program Equilibrium via Provability Logic
https://www.semanticscholar.org/paper/Program-equilibrium-Tennenholtz/e1a060cda74e0e3493d0d81901a5a796158c8410?sort=pub-date—Tennenholtz's "Program equilibrium", the paper that introduced the concept OSGT builds on, with citing papers sorted by recency
also interesting: https://www.semanticscholar.org/paper/Open-Problems-in-Cooperative-AI-Dafoe-Hughes/2a1573cfa29a426c695e2caf6de0167a12b788ef (Open Problems in Cooperative AI) and https://www.semanticscholar.org/paper/Foundations-of-Cooperative-AI-Conitzer-Oesterheld/5ccda8ca1f04594f3dadd621fbf364c8ec1b8474 (Foundations of Cooperative AI)
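Here's the toy sketch promised above: a minimal, self-contained illustration of the program-equilibrium setup from Tennenholtz's paper, where each player submits a program that gets to read the other program's source before choosing. The names (clique_bot, defect_bot, run_match) and payoffs are my own illustration, not anything from the linked papers; the provability-logic papers above are about making this kind of cooperation robust rather than relying on brittle syntactic equality as this sketch does.

```python
# A minimal sketch of a program equilibrium in the one-shot Prisoner's Dilemma,
# in the spirit of Tennenholtz's "Program equilibrium": each submitted program
# receives the other program's source code as input before choosing an action.
# All names and payoffs here are illustrative, not taken from the linked papers.
import inspect

COOPERATE, DEFECT = "C", "D"

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent's source is syntactically identical to mine.

    (clique_bot, clique_bot) is a program equilibrium sustaining mutual
    cooperation: against an exact copy it cooperates, and any unilateral
    deviation changes the source and gets defected against.
    """
    my_source = inspect.getsource(clique_bot)
    return COOPERATE if opponent_source == my_source else DEFECT

def defect_bot(opponent_source: str) -> str:
    """Always defect, regardless of the opponent's code."""
    return DEFECT

# Standard PD payoffs: (my action, their action) -> my payoff.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def run_match(p1, p2):
    a1 = p1(inspect.getsource(p2))
    a2 = p2(inspect.getsource(p1))
    return PAYOFF[(a1, a2)], PAYOFF[(a2, a1)]

if __name__ == "__main__":
    print(run_match(clique_bot, clique_bot))  # (3, 3): mutual cooperation
    print(run_match(clique_bot, defect_bot))  # (1, 1): deviating doesn't pay
```

The weirdness discussed above starts when you replace the syntactic equality check with actual proofs about the other program's behavior, which is where the provability-logic and robust-program-equilibrium papers in the list come in.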
This also connects through to putting neural networks in formal verification systems. The summary right now is that it’s possible but doesn’t scale to current model sizes. I expect scalability to surprise us.
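To make the verification point concrete, here is a toy example (mine, not from anything linked above) of the kind of guarantee formal verification of neural networks gives: interval bound propagation proves an output range that holds for every input in a box, not just sampled points. Exact methods (SMT/MILP-based) give tighter bounds but blow up combinatorially, which is the scaling problem mentioned above. The two-layer network and its weights are made up for illustration.

```python
# Toy illustration of formally verifying a property of a tiny ReLU network via
# interval bound propagation: propagate an input box through each layer and get
# a sound outer bound on the output, valid for EVERY input in the box.
# The network and weights below are invented purely for this example.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius
    return new_center - new_radius, new_center + new_radius

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps bounds to bounds elementwise."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Hypothetical 2-layer network with fixed weights.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.5])

def verified_output_bounds(x_lo, x_hi):
    lo, hi = interval_affine(x_lo, x_hi, W1, b1)
    lo, hi = interval_relu(lo, hi)
    return interval_affine(lo, hi, W2, b2)

if __name__ == "__main__":
    # Prove a bound over the whole input box [-0.1, 0.1]^2, not just sampled points.
    lo, hi = verified_output_bounds(np.array([-0.1, -0.1]), np.array([0.1, 0.1]))
    print(f"output guaranteed to lie in [{lo[0]:.3f}, {hi[0]:.3f}]")
```

This works at two neurons per layer; the reason it doesn't yet settle the open-source-game-theory questions is that the same kind of reasoning has to scale to models with billions of parameters, which is exactly where current tools fall over.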