Thanks for pointing out that bound. I will think about it. (BTW, if at any point you don’t want to continue this back-and-forth exchange, just let me know, otherwise I will probably keep responding because I always find I have things to say)
My point regarding LIA was that the theorems in the LI paper follow from dominance over all e.c. traders, and there are countably many e.c. traders. If you stop at some finite point in the enumeration, all those theorems break. Of course, you will still get something out of dominating the traders you do cover, but you’d have to go back to the blackboard to figure out what it is, because you can no longer get the answer from the paper. (And the theorems give you infinitely many properties, of which you’ll retain only finitely many, so in a sense you will lose “almost all” of the desired results.)
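The countability point can be made concrete with a toy sketch. The hypothetical weighting below (trader n gets initial budget 2^-(n+1)) is not the paper’s actual construction, which is more involved, but it shows why a countable enumeration can be covered all at once: the budgets sum to less than 1 no matter how many traders you include. Truncating the enumeration at k simply leaves every trader beyond k uncovered, and with it the corresponding property.

```python
def trader_budgets(num_traders):
    # Hypothetical LI-style weighting: enumerated trader n gets initial
    # budget 2^-(n+1). The partial sums stay below 1 for ANY number of
    # traders, so all countably many can be funded simultaneously;
    # stopping at a finite k just drops every later trader (and the
    # guarantee associated with it) from the construction.
    return [2.0 ** -(n + 1) for n in range(num_traders)]
```

For example, `sum(trader_budgets(50))` is below 1 even though 50 traders are funded; the same holds in the limit.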
Surely there is something special about the order of hypotheses in SRM? Vapnik’s 1998 book introduces SRM (start of Ch. 6) with a decomposition of actual risk into (1) empirical risk and (2) a term depending on VC dimension, analogous to the bias-variance decomposition of generalization error. Vapnik says SRM is for the case where the sample size is small enough that term (2) is non-negligible. So already, from the start, we are trying to solve a problem where “simpler = better” appears mathematically in an expression for the error we want to minimize. Then he introduces a rule for selecting a class based on the sample size, and proves an asymptotic convergence rate (Thm. 6.2) which depends on the structure having a certain property.
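A minimal sketch of that selection rule, assuming one common form of the VC confidence term (constants vary across statements of the bound, so treat this as illustrative rather than Vapnik’s exact expression): pick the structure element minimizing empirical risk plus the capacity term. With a small sample the capacity term dominates, which is exactly where “simpler = better” enters the objective.

```python
import math

def vc_confidence(h, n, eta=0.05):
    # One common form of the VC confidence term (illustrative constants):
    # sqrt( (h*(ln(2n/h) + 1) - ln(eta/4)) / n )
    # h = VC dimension of the class, n = sample size, eta = confidence level.
    return math.sqrt((h * (math.log(2 * n / h) + 1) - math.log(eta / 4)) / n)

def srm_select(emp_risks, vc_dims, n):
    # SRM-style selection: minimize the guaranteed-risk bound
    # (empirical risk + capacity term) over the nested structure.
    bounds = [r + vc_confidence(h, n) for r, h in zip(emp_risks, vc_dims)]
    return min(range(len(bounds)), key=bounds.__getitem__)
```

With `emp_risks=[0.20, 0.10, 0.05]` and `vc_dims=[5, 50, 500]`, the rule picks the simplest class at n = 1000 and the richest at n = 10^6, so the selected complexity grows with sample size.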
It is certainly true that you could order LIA’s traders by complexity (well, maybe not computably...), and I would be interested in the results. Results from some particular “good” ordering seem like the real determinants of whether LIA-like methods would be good in practice. (Since if we do not fix an ordering we can only get results that hold even for bad/“adversarial” orderings that fill early slots with nonsensical strategies)
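To be concrete about the computability caveat: ordering by true Kolmogorov complexity is not computable, but a length-lexicographic enumeration of programs is a standard computable stand-in for “simpler first.” A sketch, assuming traders are indexed by binary program strings:

```python
from itertools import count, product

def programs_by_length(alphabet="01"):
    # Enumerate all finite strings in length-lexicographic order: a
    # computable proxy for "order traders by complexity" (ordering by
    # actual Kolmogorov complexity would not be computable).
    for n in count(1):
        for p in product(alphabet, repeat=n):
            yield "".join(p)
```

The first few entries are "0", "1", "00", "01", ...; the point is just that any such fixed ordering fills the early slots with short programs rather than adversarially chosen ones.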
Your points about the difficulty of getting uniform results in this framework are interesting. My inclination is to regard this as a failure of the framework. The LI paper introduced the idea of “e.c. traders,” and the goal of not being exploitable (in some sense) by such traders; these weren’t well-established notions about which the paper simply proved some new theorems. So they are up for critique as much as anything else in the paper (indeed, they are the only things up for critique, since I’m not disputing that the theorems themselves follow from the premises). And if our chosen framework only lets us prove something that is too weak, while leaving the most obvious strengthening clearly out of reach, that suggests we are not looking at the problem (the philosophical problem, about how to think about logical induction) at the right level of “resolution.”
As I said to Vadim earlier, I am not necessarily pessimistic about the performance of some (faster?) version of LIA with a “good” ordering for T^k. But if such a thing were to work, it would be for reasons above and beyond satisfying the LI criterion, and I wouldn’t expect the LI criterion to do much work in illuminating its success. (It might serve as a sanity check—too weak, but its negation would be bad—but it might not end up being the kind of sanity check we want, i.e. the failures it does not permit might be just those required for good and/or fast finite-time performance. I don’t necessarily think this is likely, but I won’t know if it’s true or not until the hypothetical work on LIA-like algorithms is done.)