Thanks for pointing out that bound. I will think about it. (BTW, if at any point you don’t want to continue this back-and-forth exchange, just let me know; otherwise I will probably keep responding, because I always find I have things to say.)
My point regarding LIA was that the theorems in the LI paper follow from dominance over all e.c. (efficiently computable) traders, and there are countably many e.c. traders. If you stop at the first N, all of those theorems break. Of course, you will still get something out of dominating the N traders, but you would have to go back to the blackboard to figure out exactly what, because you can no longer read the answer off the paper. (And the theorems give you infinitely many properties, of which you’ll retain only finitely many, so in a sense you will lose “almost all” of the desired results.)
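For concreteness, here is the shape of the criterion those theorems hang off of, as I remember it (schematic; the paper’s formal statement quantifies over “plausible worlds” and has more bookkeeping than this):

```latex
% Schematic form of the logical induction criterion (paraphrased from
% memory, not the paper's exact statement). Write $W_n(T)$ for the
% value of trader $T$'s accumulated holdings at stage $n$. Then $T$
% exploits the market if its holdings are bounded below but unbounded
% above in value:
\[
  \inf_n W_n(T) > -\infty
  \qquad\text{and}\qquad
  \sup_n W_n(T) = +\infty,
\]
% and the criterion demands that this fail for *every* e.c. trader.
% Truncating at N keeps only N of these countably many conditions,
% which is exactly why the paper's theorems stop applying.
```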
Surely there is something special about the order of hypotheses in SRM? Vapnik’s 1998 book introduces SRM (start of Ch. 6) with a decomposition of actual risk into (1) empirical risk and (2) a confidence term depending on the VC dimension, analogous to the bias-variance decomposition of generalization error. Vapnik says SRM is for the case where the sample size is small enough that term (2) is non-negligible. So already, from the start, we are trying to solve a problem where “simpler = better” appears mathematically in an expression for the error we want to minimize. Then he introduces a rule n(l) for selecting a class S_n based on the sample size l, and proves an asymptotic convergence rate (Thm. 6.2) that depends on n(l) having a certain property.
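For reference, the decomposition I have in mind has roughly this shape (schematic, from memory; the exact constants and form of the confidence term are in Ch. 6 of Vapnik’s book):

```latex
% Schematic decomposition behind SRM (paraphrased; see Vapnik 1998,
% Ch. 6, for the exact statement). With probability at least
% $1 - \eta$, simultaneously for all functions $\alpha$ in a class
% $S_k$ of VC dimension $h_k$, given sample size $l$:
\[
  R(\alpha) \;\le\; R_{\mathrm{emp}}(\alpha)
           \;+\; \Phi\!\left(\tfrac{h_k}{l}, \eta\right),
\]
% where $R$ is the actual risk, $R_{\mathrm{emp}}$ the empirical risk,
% and $\Phi$ grows with $h_k / l$; in the classification case it is of
% order $\sqrt{(h_k(\ln(2l/h_k) + 1) - \ln(\eta/4))/l}$. SRM picks the
% class minimizing the sum of the two terms, which is where
% "simpler = better" enters the error expression directly.
```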
It is certainly true that you could order LIA’s traders by complexity (well, maybe not computably...), and I would be interested in the results. Results from some particular “good” ordering seem like the real determinants of whether LIA-like methods would be good in practice. (Since if we do not fix an ordering, we can only get results that hold even for bad/“adversarial” orderings that fill the early slots with nonsensical strategies.)
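To illustrate the computability caveat: Kolmogorov complexity itself is not computable, but a length-lexicographic enumeration of trader source code is, and it is the usual computable stand-in for “order by complexity”. A toy sketch (the alphabet and everything else here are hypothetical illustration, not anything from the LI paper):

```python
from itertools import count, product

ALPHABET = "abc"  # toy stand-in for the syntax of a real programming language

def length_lex_programs():
    """Enumerate all strings over ALPHABET in length-lexicographic order.

    This is a computable total ordering by description length: every
    string appears at a finite index, and shorter (simpler) programs
    come first. It is the standard computable proxy for ordering by
    (incomputable) Kolmogorov complexity.
    """
    for n in count(1):
        for chars in product(ALPHABET, repeat=n):
            yield "".join(chars)

# The first N slots of the ordering. In a "good" ordering these would
# be the simplest trading strategies; an arbitrary re-ordering is free
# to fill them with nonsense instead, which is why results that hold
# for every ordering are so weak.
N = 5
for i, prog in zip(range(N), length_lex_programs()):
    print(i, repr(prog))
```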