Very nice. This is the cleanest result on cognitive (or rationality) costs in co-operative systems that I’ve seen. Modal combat seems kind of esoteric compared to, say, iterated prisoner’s dilemma tournaments with memory, but it pays off nicely here. It gives you the outcomes of a set of other-modelling agents (without, e.g., doing a whole lot of simulation), and the box-operator depth then plugs in as a natural modelling-cost measure.
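To make the cost measure concrete, here’s a minimal sketch (my own toy encoding, not the actual provability-logic machinery from any published implementation): modal agents as formulas with a □ operator, where the nesting depth of □ counts how many layers of “proofs about proofs” an agent uses to model its opponent.

```haskell
-- Toy modal-agent formulas over one proposition
-- ("the opponent cooperates against me").
data Formula
  = Var              -- opponent cooperates
  | Neg Formula
  | And Formula Formula
  | Imp Formula Formula
  | Box Formula      -- "it is provable that ..."
  deriving Show

-- Box-operator nesting depth: a crude modelling-cost measure.
boxDepth :: Formula -> Int
boxDepth Var       = 0
boxDepth (Neg f)   = boxDepth f
boxDepth (And f g) = max (boxDepth f) (boxDepth g)
boxDepth (Imp f g) = max (boxDepth f) (boxDepth g)
boxDepth (Box f)   = 1 + boxDepth f

-- FairBot cooperates iff it can prove the opponent cooperates:
fairBot :: Formula
fairBot = Box Var

-- A depth-2 agent that also reasons about what is provably
-- *not* provable about the opponent (hypothetical example):
deeperBot :: Formula
deeperBot = And (Box Var) (Box (Neg (Box (Neg Var))))

main :: IO ()
main = do
  print (boxDepth fairBot)    -- 1
  print (boxDepth deeperBot)  -- 2
```

So under this toy encoding, FairBot pays cost 1 and the deeper agent pays cost 2; the appeal is that the cost falls straight out of the syntax of the agent’s definition.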
Did you ever publish any of your modal combat code? (I have a vague recollection that you had some Haskell code.)
There is this: https://github.com/machine-intelligence/provability