So, to be clear, the difference between an optimal predictor and a quasi-optimal predictor is as follows: δ(k) is the amount by which some other polynomial-size circuit family is able to beat the current predictor. An optimal predictor cannot be beaten by any more than a δ such that k^N · δ(k) → 0 for any N, while a quasi-optimal predictor can only assert it cannot be beaten by any more than a δ such that δ(k) → 0. Yes?
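The gap between the two bounds can be made concrete with a toy numerical check (my own illustration, not from the discussion above): δ(k) = 2^(−k) is negligible, since k^N · δ(k) → 0 for every fixed N, whereas δ(k) = 1/k vanishes but is not negligible, since already k² · (1/k) = k diverges.

```python
# Toy illustration of "negligible" vs merely "vanishing" error bounds.
# Both deltas go to 0, but only the first one beats every polynomial.

def negligible_delta(k):
    return 2.0 ** -k   # decays faster than any polynomial grows

def vanishing_delta(k):
    return 1.0 / k     # vanishes, but only polynomially fast

N = 2  # any fixed polynomial power
for k in (10, 100, 1000):
    # k^N * 2^-k shrinks toward 0; k^N * (1/k) = k blows up
    print(k, k**N * negligible_delta(k), k**N * vanishing_delta(k))
```

So a quasi-optimal predictor tolerates error profiles like 1/k that an optimal predictor rules out.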
Exactly. It’s such a simple tweak that it’s embarrassing I didn’t notice it in the first place. It’s just that in average-case complexity theory negligible errors seem much more popular. The price to pay is that some theorems require stronger assumptions: namely, something that had to be bounded by a polynomial now has to be bounded by a constant. On the other hand, Theorem 9 demonstrates that there is no chance optimal predictors cover all of VerNP (for example), whereas quasi-optimal predictors might work. Also, I think that in VerNP there is a universal construction for uniform quasi-optimal predictors (as opposed to optimal predictors) similar to Levin’s universal search, although I haven’t fleshed out the details yet (anyhow, such a construction would be theoretically valid but highly inefficient in practice).
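For readers unfamiliar with the reference, the dovetailing idea behind Levin's universal search can be sketched as follows. This is a generic toy sketch, not the predictor construction alluded to above: candidate programs are interleaved so that program i receives roughly a 2^(−i) fraction of the total running time, and the first verified answer wins. The `programs` and `verifier` arguments here are hypothetical stand-ins.

```python
# Hedged sketch of Levin-style universal search via dovetailing.
# Each "program" is a generator of candidate answers for input x;
# the verifier checks a candidate quickly. Program i gets an
# exponentially smaller share of the time budget.

def universal_search(programs, verifier, x, max_phase=30):
    gens = [p(x) for p in programs]
    for phase in range(max_phase):
        for i, g in enumerate(gens):
            # program i gets ~2^(phase - i) steps this phase,
            # so overall it receives ~2^-i of the total budget
            steps = 2 ** max(phase - i, 0)
            for _ in range(steps):
                try:
                    candidate = next(g)
                except StopIteration:
                    break
                if verifier(x, candidate):
                    return candidate
    return None

# Toy usage: find a nontrivial divisor of 15 by brute enumeration.
def brute(x):
    n = 2
    while True:
        yield n
        n += 1

print(universal_search([brute], lambda x, c: x % c == 0, 15))
```

The constant-factor overhead hidden in the interleaving is astronomical, which is exactly the sense in which such constructions are "theoretically valid but highly inefficient in practice."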