It seems easy to invent games that favor specific epistemologies. Or do you think game 1 means something more, that it’s a “typical” game in some sense, while game 3 isn’t? I’d love to see a result like that.
What if we restrict attention to games where at every stage the players may choose a probability distribution over the set of available moves, rather than being forced to commit to a single move? Is it then possible for Solomonoff induction to lose (whatever ‘lose’ means) with non-zero probability, in the limit?
In other words, does Solomonoff induction win all variants of game 1 that use different proper scoring rules in place of log score? Nice question. I’m going to sleep, will try to solve it tomorrow unless someone else does it first.
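(For anyone picking this up while I sleep, here is the standard sense of “proper” I have in mind. A scoring rule $S(p, x)$ rewards a forecast $p$ when outcome $x$ is observed; it is proper if honestly reporting your true distribution maximizes your expected score:

$$q \in \operatorname*{arg\,max}_p \; \mathbb{E}_{x \sim q}\left[S(p, x)\right] \quad \text{for every distribution } q.$$

The log score $S(p, x) = \log p(x)$ is proper, and so is, for example, the negative Brier score $S(p, x) = -\sum_i \left(p_i - [x = i]\right)^2$. The question is whether replacing the log score in game 1 with any other proper scoring rule changes the outcome.)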
Game 1 is designed to test an epistemology as an epistemology, without considering any decision theory built on top of it. Figuring out a good decision theory is harder. What game 3 shows is that even starting with an ideal epistemology, you can’t build a decision theory that outperforms all others in all environments.