I agree that the LI criterion is “pointwise” in the way that you describe, but I think that this is both pretty good and as much as could actually be asked. A single efficiently computable trader can do a lot. It can enforce coherence on a polynomially growing set of sentences, search for proofs using many different proof strategies, enforce a polynomially growing set of statistical patterns, enforce reflection properties on a polynomially large set of sentences, etc. So, eventually the market will not be exploitable on all these things simultaneously, which seems like a pretty good level of accurate beliefs to have.
On the other side of things, it would be far too strong to ask for a uniform bound of the form “for every ε>0, there is some day t such that after step t, no trader can multiply its wealth by a factor more than 1+ε”. This is because a trader can be hardcoded with arbitrarily many hard-to-compute facts. For every δ, there must eventually be a day t′>t on which your logical inductor assigns probability less than δ to some true statement, at which point a trader who has that statement hardcoded can multiply its wealth by 1/δ. (I can give a construction of such a sentence using self-reference if you want, but it’s also intuitively natural—just pick many mutually exclusive statements with nothing to break the symmetry.)
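The symmetry argument can be made concrete with a toy calculation (a sketch of my own, not a construction from the LI paper; the trading setup here is deliberately simplified):

```python
# Toy illustration of the symmetry argument: among N mutually exclusive
# statements, any probability assignment must give some statement a price
# of at most 1/N. A trader hardcoded with the true one buys it at that
# price and multiplies its wealth by at least N (i.e. 1/delta with
# delta = 1/N). Names here are illustrative, not from the LI paper.

def exploit_factor(prices, true_index):
    """Wealth multiplier for a trader that spends everything buying
    shares of the true statement: each share costs prices[true_index]
    and pays out 1 once the statement is verified."""
    return 1.0 / prices[true_index]

# Market spreads its belief uniformly over N = 1024 mutually exclusive
# statements; the hardcoded trader knows statement 42 is the true one.
N = 1024
prices = [1.0 / N] * N
print(exploit_factor(prices, true_index=42))  # 1024.0
```

Since nothing breaks the symmetry, the market cannot do better than the uniform assignment here, so the 1/δ blow-up is unavoidable no matter how late the day.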
Thus, I wouldn’t think of traders as “mistakes”, as you do in the post. A trader can gain money on the market if the market doesn’t already know all facts that will be listed by the deductive process, but that is a very high bar. Doing well against finitely many traders is already “pretty good”.
What you can ask for regarding uniformity is for some simple function f such that any trader T can multiply its wealth by at most a factor f(T). This is basically the idea of the mistake bound model in learning theory; you bound how many mistakes happen rather than when they happen. This would let you say more than the one-trader properties I mentioned in my first paragraph. In fact, LIA has this property; f(T) is just the initial wealth of the trader. You may therefore want to do something like setting traders’ initial wealths according to some measure of complexity. Admittedly this isn’t made explicit in the paper, but not much more needs to be done to think in this way; it’s just the combination of the individual proofs in the paper with the explicit bounds you get from the initial wealths of the traders involved.
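To make the initial-wealth idea concrete, here is a toy weighting scheme (my own sketch; the specific 2^(-k) weighting is an assumption in the spirit of prefix-style complexity measures, not the paper’s actual construction):

```python
# Toy sketch of complexity-weighted initial wealths (illustrative, not
# from the LI paper). Trader k in some fixed enumeration gets initial
# wealth 2^(-(k+1)), so the total wealth handed out converges; the
# per-trader bound f(T) then degrades gracefully with the trader's
# complexity instead of being uniform over all traders.

def initial_wealth(index):
    """Prefix-style weighting: trader k gets 2^(-(k+1))."""
    return 2.0 ** -(index + 1)

# Total exposure over the first 50 traders stays below 1.
total = sum(initial_wealth(k) for k in range(50))
print(total)  # just under 1.0
```

The point is only that some convergent assignment of initial wealths turns the pointwise guarantees into a single graded bound, with simpler traders getting tighter bounds.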
I basically agree completely on your last few points. The traders are a model class, not an ensemble method in any substantive way, and it is just confusing to connect them to the papers on ensemble methods that the LI paper references. Also, while I use the idea of logical induction to do research that I hope will be relevant to practical algorithms, it seems unlikely that any practical algorithm will look much like an LI. For one thing, finding fixed points is really hard without some property stronger than continuity, and you’d need a pretty good reason to put it in the inner loop of anything.
Your points about the difficulty of getting uniform results in this framework are interesting. My inclination is to regard this as a failure of the framework. The LI paper introduced the idea of “e.c. traders,” and the goal of not being exploitable (in some sense) by such traders; these weren’t well-established notions that the paper simply proved some new theorems about. So they are up for critique as much as anything else in the paper (indeed, they are the only things up for critique, since I’m not disputing that the theorems themselves follow from the premises). And if our chosen framework only lets us prove something that is too weak, while leaving the most obvious strengthening clearly out of reach, that suggests we are not looking at the problem (the philosophical problem, about how to think about logical induction) at the right level of “resolution.”
As I said to Vadim earlier, I am not necessarily pessimistic about the performance of some (faster?) version of LIA with a “good” ordering for T^k. But if such a thing were to work, it would be for reasons above and beyond satisfying the LI criterion, and I wouldn’t expect the LI criterion to do much work in illuminating its success. (It might serve as a sanity check—too weak, but its negation would be bad—but it might not end up being the kind of sanity check we want, i.e. the failures it does not permit might be just those required for good and/or fast finite-time performance. I don’t necessarily think this is likely, but I won’t know if it’s true or not until the hypothetical work on LIA-like algorithms is done.)