I’d also like to specially examine what happens if we use a Solomonoff prior over inputs (each program of length k gets weight 2^-k, and each input gets the total weight of the programs that output it). Suppose some program of length L solves the problem correctly, and consider any deterministic algorithm of length M that errs on at least one input. There is then a program of length roughly M + L that runs the two algorithms against each other on inputs in canonical order until it finds an input on which they disagree, and outputs that input. The first input on which the length-M algorithm errs therefore has Solomonoff probability at least about 2^-(M+L). Taking the contrapositive: any deterministic algorithm of length M that is incorrect with probability much less than 2^-(M+L) with respect to the Solomonoff prior must always work. So “almost always” with respect to Solomonoff collapses to “always”, and we haven’t really done anything different than redefine P in a very roundabout way. Crucially, this means that even if we are fully Bayesian, under a Solomonoff prior the question of whether every randomized polynomial-time algorithm has a deterministic counterpart still boils down to the deep, unresolved P vs. BPP question.
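To make the discrepancy-finding program concrete, here is a minimal Python sketch of the construction (not from the original text). The names correct_parity, buggy_parity, and enumerate_bitstrings are toy stand-ins for the length-L correct program, the length-M deterministic algorithm, and a canonical enumeration of inputs; the point is only that first_discrepancy adds O(1) bits on top of the two algorithms it wraps.

```python
from itertools import count, product

def enumerate_bitstrings():
    # Canonical enumeration: all bitstrings in length-lexicographic order.
    for n in count(0):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def correct_parity(x):
    # Toy stand-in for the length-L correct program.
    return x.count("1") % 2

def buggy_parity(x):
    # Toy stand-in for the length-M deterministic algorithm:
    # deliberately wrong on one input so the search below halts.
    return 1 - correct_parity(x) if x == "101" else correct_parity(x)

def first_discrepancy(det_alg, correct_alg, inputs):
    # Run the two algorithms against each other until they disagree,
    # then output the disagreeing input. This wrapper adds only O(1)
    # bits to the two algorithms, so the erring input it outputs has
    # Solomonoff probability at least about 2^-(M+L). If det_alg never
    # errs, the search runs forever.
    for x in inputs:
        if det_alg(x) != correct_alg(x):
            return x  # the disagreeing input the argument refers to

print(first_discrepancy(buggy_parity, correct_parity, enumerate_bitstrings()))
# Prints "101": the first erring input, now describable in about M + L bits.
```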
I’ve read this 3 times, and still can’t tell what it’s supposed to mean. For starters, what is the “that” in “uses that as the input”? Grammatically, it should be “a discrepancy”, but a discrepancy is a pair of outputs that are different, and I don’t see how to use that as an input. I’m also dismayed by the phrase “almost always … is more or less the same as always”, which is a tautology, and so should not be a conclusion.