Let F be the SWBI that just says F(S) = (number of 0s in S) - (number of 1s in S). Let M be an arbitrary semicomputable SWBI. Then I don’t think M can dominate both F and -F up to an additive constant.
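Spelling that out in symbols (this is just a restatement of the sentence above; in particular I'm reading "dominate up to an additive constant" in the usual way, which is an assumption on my part):

```latex
F(S) \;=\; \#\{\, i : S_i = 0 \,\} \;-\; \#\{\, i : S_i = 1 \,\},
\qquad
\text{``$M$ dominates $F$ up to an additive constant''}
\;:\Longleftrightarrow\;
\exists c \;\; \forall S :\; M(S) \,\ge\, F(S) - c .
```

The claim is that no semicomputable M satisfies this simultaneously for F and for -F.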
I have a vague, handwavy argument:
Suppose the ‘data’ is an algorithmically random sequence, so F traces out (essentially) a discrete random walk. Now a random walk must return infinitely many times to the ‘center’ (where F = 0). And in order to dominate both F and -F, while the walk is far from the center, M is going to have to be increasing at a rate of about 1 per step moving away from the center. But M can’t know exactly when the walk is going to turn around and head back home, so by the time it gets home, M will have lost about 1 (i.e. M will be about 1 less than it was the last time the walk was at the center). And this is going to happen infinitely many times, so the total loss is unbounded.
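To make the "loses about 1 per return to the center" step concrete, here is a toy simulation. It assumes a particular betting reading of SWBIs (each step you announce a bet in [-1, 1] on the next bit being 0 and your score moves by that amount, so F is the one that always bets +1), it uses pseudorandom coin flips as a stand-in for an algorithmically random sequence, and it tracks one particular naive online strategy (follow-the-leader) rather than an arbitrary semicomputable M, so it only illustrates the intuition and proves nothing. The function name and setup are mine, not anything from the discussion.

```python
import random

def follow_the_leader_shortfall(bits):
    """Toy model: each step a strategy bets b in [-1, 1] on the next bit
    being 0 and its score changes by +b if the bit is 0, -b if it is 1.
    F always bets +1 (so its score is #0s - #1s); -F always bets -1.
    Follow-the-leader bets on whichever of F, -F is currently ahead.
    Returns the history of its shortfall relative to max(F, -F) = |F|."""
    F = 0          # F's score = (#0s so far) - (#1s so far): a random walk
    M = 0.0        # follow-the-leader's score
    shortfall = []
    for bit in bits:
        bet = 0 if F == 0 else (1 if F > 0 else -1)  # side with the current leader
        gain_F = 1 if bit == 0 else -1               # F's gain this step
        M += bet * gain_F
        F += gain_F
        shortfall.append(abs(F) - M)
    return shortfall

if __name__ == "__main__":
    random.seed(0)
    bits = [random.randint(0, 1) for _ in range(100_000)]
    s = follow_the_leader_shortfall(bits)
    # The shortfall increases by exactly 1 each time the walk sits at the
    # center (F == 0) and is flat otherwise, so it equals the number of visits
    # to 0.  It grows without bound, but only on the order of sqrt(T) visits
    # in expectation -- which connects to the worry later in the thread about
    # how long returns to the center take.
    print("final shortfall:", s[-1], "after", len(bits), "steps")
```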
But it’s still possible that some M could dominate all semicomputable SWBIs in the weaker sense of losing less than epsilon per step, for all epsilon > 0.
You may already know this, but for the benefit of onlookers: multiplicative weights over F and -F is computable and kinda sorta dominates both F and -F in the weaker sense that you specified. You can make epsilon arbitrarily close to 0 by choosing a multiplier close to 1, at the cost of also incurring an additive constant loss that goes to infinity as epsilon -> 0. Multiplicative weights over all lower-semicomputable SWBIs dominates them all in the same sense, but it doesn’t seem to be lower-semicomputable itself, only approximable (or maybe I just failed to find a proof).
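A minimal sketch of that multiplicative-weights aggregator, under the same betting reading of SWBIs as in the simulation above (bet in [-1, 1] on the next bit being 0); the learning rate eta, the exp(eta * gain) update, and the function name are my choices for illustration, not anything specified in the comment:

```python
import math
import random

def hedge_over_F_and_minus_F(bits, eta=0.05):
    """Multiplicative weights (Hedge) over the two experts F and -F.
    F always bets +1 on the next bit being 0; -F always bets -1.
    The aggregator bets the weight-averaged amount, which lies in [-1, 1].
    Standard Hedge analysis gives, roughly,
        M_T >= max(F_T, -F_T) - (eta/2) * T - ln(2) / eta,
    i.e. a per-step loss that shrinks with eta and an additive constant that
    blows up as eta -> 0, matching the trade-off described above."""
    w_pos, w_neg = 1.0, 1.0   # weights on F and -F
    M = 0.0                   # aggregator's score
    F = 0                     # F's score = (#0s) - (#1s)
    for bit in bits:
        bet = (w_pos - w_neg) / (w_pos + w_neg)  # weighted average of +1 and -1
        gain_F = 1 if bit == 0 else -1           # F gains this; -F gains the opposite
        M += bet * gain_F
        F += gain_F
        # multiplier close to 1 (exp(eta) ~ 1 + eta) <=> small per-step loss
        w_pos *= math.exp(eta * gain_F)
        w_neg *= math.exp(-eta * gain_F)
    return M, F

if __name__ == "__main__":
    random.seed(1)
    bits = [random.randint(0, 1) for _ in range(100_000)]
    M, F = hedge_over_F_and_minus_F(bits)
    print("M =", round(M, 1), " max(F, -F) =", abs(F))
```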
Thanks! Your counterexample looks like it might work, but I’m having trouble working out a more rigorous proof.
Me too. :-/
Actually, I’m Very Suspicious of that argument now, because of how long it takes a random walk to ‘return home’. Perhaps it’s just wrong.
MathOverflow ROCKS! When your first experience with an online community looks like this, you know you’re gonna stay there.
Thanks! I agree it’s hard to imagine a lower-semicomputable M that would dominate both F and -F when S is algorithmically random, but a more solid proof would be nice because about half the steps in your argument aren’t intuitively obvious to me.