Has anyone proved a theorem on the uselessness of randomness?
Nominull’s suggestion:
“By adding randomness to your algorithm, you spread its behaviors out over a particular distribution, and there must be at least one point in that distribution whose expected value is at least as high as the average expected value of the distribution.”
That might be a starting point for such a proof. However, it does rely on being able to find that “one point” whose expected value is at least as high as the average expected value of the distribution. Perhaps it is easy to prove that good points lie in a certain range, yet every obvious specific choice in that range turns out to be bad.
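For concreteness, here is a minimal formalization of Nominull's averaging argument; the notation (A, x, r, D, V) is mine, not the original comment's:

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Averaging argument: some fixed choice of random bits r* does at least as well,
% in expectation over inputs x, as the randomized algorithm's average value V.
\[
V \;=\; \mathbb{E}_{r \sim D}\!\left[\, \mathbb{E}_{x}\!\left[ A(x, r) \right] \right]
\quad\Longrightarrow\quad
\exists\, r^{*}:\; \mathbb{E}_{x}\!\left[ A(x, r^{*}) \right] \;\ge\; V,
\]
\[
\text{because for any function } g \text{ and any distribution } D,\quad
\max_{r}\, g(r) \;\ge\; \mathbb{E}_{r \sim D}\!\left[ g(r) \right].
\]
\end{document}
```

Note that the argument is nonconstructive: it guarantees that such an r* exists, but says nothing about how to identify it, which is exactly the gap raised above.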