Concise Open Problem in Logical Uncertainty

The purpose of this post is to present a short, fully mathematically specified conjecture which can be worked on with very little background, but which has an important consequence in logical uncertainty. Not many man-hours have been put into this question yet, so it is plausible that a MIRIx team could solve it.
Let $E$ be an environment: a function from $\mathbb{N}$ to $\{0,1\}$.
Let $M$ be an algorithm which, on input $n$, is given oracle access to $E(i)$ for all $i < \log n$, and which outputs a probability $p(n)$.
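The access pattern can be made concrete with a small Python harness. This is entirely my own illustration: the name `run_predictor` and the choice of natural log for $\log n$ are assumptions, since the post does not fix a base.

```python
import math

def run_predictor(M, E, n_max):
    """Drive predictor M over environment E: on input n, M may
    query E(i) only for i < log n (natural log assumed here)."""
    preds = []
    for n in range(1, n_max + 1):
        horizon = math.log(n)  # the oracle's reach grows very slowly
        def oracle(i):
            if i < horizon:
                return E(i)
            raise ValueError("E(%d) is not yet observable at n=%d" % (i, n))
        preds.append(M(n, oracle))
    return preds

# A predictor that ignores its oracle and always outputs 1/2.
print(run_predictor(lambda n, oracle: 0.5, lambda i: 1, 5))  # [0.5, 0.5, 0.5, 0.5, 0.5]
```

The point of the harness is only that $M$ makes exponentially more predictions than it sees bits: by input $n$ it has emitted $n$ probabilities but observed only about $\log n$ bits.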
Definition: If
$$\lim_{n\to\infty}\prod_{i=1}^{n}\frac{|p(i)+E(i)-1|}{|E(i)-\tfrac{1}{3}|}=0,$$
then $M$ is bad. Similarly, if
$$\lim_{n\to\infty}\prod_{i=1}^{n}\frac{|p(i)+E(i)-1|}{|E(i)-\tfrac{2}{3}|}=0,$$
then $M$ is bad. Otherwise, $M$ is good. (Note that if there is no limit, this does not mean $M$ is bad.)
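To see the definition in action, note that $|E(i)-\tfrac13| = |\tfrac23 + E(i) - 1|$ is exactly the probability the constant predictor $p \equiv \tfrac23$ assigns to the observed bit, so each product compares $M$'s likelihood to that of a constant advisor. A Python sketch (the function name is my own; the product is computed in log space, where "product $\to 0$" becomes "sum $\to -\infty$"):

```python
import math
import random

def log_ratio(preds, bits, q):
    """Log of prod_i |p(i)+E(i)-1| / |q+E(i)-1|: M's log-likelihood
    minus that of the constant predictor q on the same bits."""
    return sum(math.log(abs(p + e - 1)) - math.log(abs(q + e - 1))
               for p, e in zip(preds, bits))

# Toy check: on i.i.d. bits of bias 2/3, the constant-1/2 predictor
# drifts toward -infinity against q = 2/3, i.e. it is bad in the
# first sense on such an environment.
random.seed(0)
bits = [1 if random.random() < 2 / 3 else 0 for _ in range(10_000)]
print(log_ratio([0.5] * len(bits), bits, 2 / 3))  # large and negative
```

Being good thus requires the product against both constant advisors to stay bounded away from a limit of zero.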
Conjecture: For every algorithm $M$, there exists an environment $E$ such that $M$ is bad.
Intuitively, $M$ is slowly seeing bits from $E$, and much more quickly making predictions about $E$. If $M$ is bad in the first sense, it has made much worse predictions than if it had just output 2/3 every time. If $M$ is bad in the second sense, it has made much worse predictions than if it had just output 1/3. It seems easy enough to avoid the first failure mode alone: we just switch to outputting 2/3 once we cross some threshold where 2/3 has been doing better. Adding the second failure mode makes this strategy stop working, because an evil environment could cause us to lock in to 2/3, and then switch to producing bits for which 1/3 is the better prediction forever.
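Ignoring the observation delay, the lock-in attack can be sketched in a few lines of Python. The predictor, threshold, and environment here are all my own toy constructions, not anything specified in the post: the environment emits 1s until the threshold rule commits to 2/3, then emits 0s forever, after which the product against the constant-1/3 advisor tends to 0.

```python
import math

def evil_run(n_steps, threshold=-5.0):
    """Toy lock-in predictor vs. an adversarial environment.
    M outputs 1/2 until the constant-2/3 advisor beats it by
    `threshold` in log-likelihood, then outputs 2/3 forever;
    the environment emits 1s until M locks in, then 0s forever."""
    locked = False
    s23 = 0.0  # log of prod |p+e-1| / |2/3+e-1|
    s13 = 0.0  # log of prod |p+e-1| / |1/3+e-1|
    for _ in range(n_steps):
        p = 2 / 3 if locked else 0.5
        e = 0 if locked else 1          # evil: flip the bits once M commits
        correct = abs(p + e - 1)        # probability M gave the true bit
        s23 += math.log(correct) - math.log(abs(2 / 3 + e - 1))
        s13 += math.log(correct) - math.log(abs(1 / 3 + e - 1))
        if not locked and s23 < threshold:
            locked = True
    return s23, s13

s23, s13 = evil_run(1000)
print(s23)  # stays bounded: M dodges the first badness condition
print(s13)  # diverges toward -infinity: M is bad in the second sense
```

After locking in, `s23` stops changing (M and the constant-2/3 advisor make identical predictions), so M escapes the first condition, but `s13` decreases without bound. This is why the two conditions together defeat the simple switching strategy.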
We would like an $M$ which can take a countable set of advisors and do at least as well as all of them, even when there is a delay in the observations. However, I believe that $M$ cannot even do as well as two constant advisors. The above conjecture implies that $M$ cannot balance the predictions of even two constant advisors, and therefore also cannot balance countably many. Note that a disproof would also be very interesting.