I thought of it some more… Don’t think of probability as a way to represent ignorance. Probability represents a kind of knowledge. For example, when you say “the probability that the coin fell heads is 1/2”, that represents a sort of model of what bounces do to the coin’s orientation. When you multiply probabilities for a pair of coin throws, that represents independence of the throws. The 1/2 for Q(i) is not representative of that, and even if it were representative of something (statistics on the Q), the product of these numbers is not a probability but a lower bound on the probability under an independence assumption; if they aren’t independent, the probability can still be 1/2.
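(To make the independence point concrete, here is a minimal sketch, mine rather than anything from the comment above, comparing the product of the marginal probabilities with the actual joint probability when the events are perfectly correlated.)

```python
import random

# Each "event" has marginal probability 1/2.  Under independence the joint
# probability is 1/2 * 1/2 = 1/4; under perfect correlation it is 1/2, so the
# product of the marginals is not, in general, the joint probability.
N = 100_000
joint_indep = 0
joint_corr = 0
for _ in range(N):
    a = random.random() < 0.5   # event A
    b = random.random() < 0.5   # event B, independent of A
    c = a                       # event C, perfectly correlated with A
    if a and b:
        joint_indep += 1
    if a and c:
        joint_corr += 1

print("product of marginals:", 0.5 * 0.5)        # 0.25
print("joint, independent:  ", joint_indep / N)  # ~0.25
print("joint, correlated:   ", joint_corr / N)   # ~0.5
```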
Probabilistic methods certainly are very useful in time-bounded calculations (they are used very heavily in computer graphics and in simulations of all kinds), but that is usually through the use of a random number source or a sufficiently good random number generator, as in probabilistic primality tests such as Miller-Rabin. The time-bounded bot, far from evaluating and updating “probabilities”, would have to do symbolic mathematics in search of an imperfect solution, making use of random number sources wherever appropriate and striving for bounds on the probability of being wrong. (E.g. when I implemented Miller-Rabin, I just made it run enough rounds that it wouldn’t fail before the heat death of the universe, assuming my PRNG didn’t somehow return a huge string of bad witnesses, which would be very interesting if it happened.)
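(For concreteness, a minimal textbook-style sketch of Miller-Rabin, not the implementation referred to above: each additional random witness shrinks the bound on the probability of a wrong “prime” answer by a factor of at least 4, so the number of rounds directly controls that bound.)

```python
import random

def is_probable_prime(n: int, rounds: int = 64) -> bool:
    """Miller-Rabin: always True for primes; a composite slips through a
    single round with probability at most 1/4, so at most 4**(-rounds) overall."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # random witness
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # witness proves n composite
    return True                          # probably prime

# With 64 rounds the error bound per composite is 4**-64, about 2.9e-39.
print(is_probable_prime(2**127 - 1))     # True: a Mersenne prime
print(is_probable_prime(323))            # 17 * 19; False with overwhelming probability
```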
I’ll be specific. Shannon entropy represents ignorance. The bigger it is, the more ignorant you are.
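(A quick illustrative computation, my numbers rather than the commenter’s: a fair coin has the maximal entropy of 1 bit for two outcomes, while a heavily biased coin, about which you are far less ignorant, has much lower entropy.)

```python
import math

def shannon_entropy(probs):
    """H(p) = -sum(p_i * log2(p_i)), in bits; 0 * log(0) is taken as 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))    # 1.0 bit: maximal ignorance about the outcome
print(shannon_entropy([0.99, 0.01]))  # ~0.08 bits: you already mostly know the outcome
print(shannon_entropy([1.0, 0.0]))    # 0.0 bits: no ignorance at all
```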
Well, that doesn’t seem very useful, as per Wei Dai’s example...
Why not look at the methods people actually employ to approximate and guess at mathematics when they can’t quite compute something? Applied mathematics is an enormous field.