Concise Open Problem in Logical Uncertainty
The purpose of this post is to provide a short, fully mathematically specified conjecture which can be worked on with very little background, but which has an important consequence in logical uncertainty. Not many man-hours have been put into this question yet, so it is plausible that a MIRIx team could solve this problem.
Let E be an environment, i.e., a function from the positive integers to {0,1}.
Let M be an algorithm which on input n is given oracle access to E(m) for all m < log(n), and which outputs a probability p(n).
Definition: If lim_{n→∞} ∏_{m≤n} |p(m)+E(m)−1| / |2/3+E(m)−1| = 0, then M is bad. Similarly, if lim_{n→∞} ∏_{m≤n} |p(m)+E(m)−1| / |1/3+E(m)−1| = 0, then M is bad. Otherwise, M is good. (Here |p(m)+E(m)−1| is the probability M assigned to the bit that actually occurred. Note that if there is no limit, this does not mean M is bad.)
Conjecture: For every algorithm M, there exists an environment E such that M is bad.
Intuitively, M is slowly seeing bits from E, and much more quickly making predictions about E. If M is bad in the first sense, that means that it has made much worse predictions than if it had just output 2/3. If M is bad in the second sense, that means that it has made much worse predictions than if it had just output 1/3. It seems easy enough to avoid the first failure mode: we just switch to outputting 2/3 if we cross some threshold where 2/3 has been doing better. Adding the second failure mode makes this strategy stop working, because an evil environment could cause us to lock in on 2/3, and then switch to giving probability 1/3 forever.
We would like an M which can take a countable set of advisors and do at least as well as all of them, even when there is a delay in the observations. However, I believe that M can't even do at least as well as two constant advisors. The above conjecture implies that M cannot even balance the predictions of two constant advisors, and therefore also cannot balance countably many advisors. Note that a disproof would also be very interesting.
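The definition compares M's likelihood on the observed bits with that of a constant predictor (2/3 or 1/3). Here is a minimal Python sketch of that comparison, with a placeholder environment and predictor (M's restricted oracle access is ignored here, since this placeholder predictor never consults E):

```python
def likelihood_ratio_vs_constant(E, p, c, n):
    """Product over m = 1..n of (M's probability of bit E(m)) divided by
    (the probability the constant predictor c assigns to that bit).
    M is bad in the sense of the post if this tends to 0 for c = 2/3
    or for c = 1/3."""
    ratio = 1.0
    for m in range(1, n + 1):
        bit = E(m)
        m_prob = p(m) if bit == 1 else 1.0 - p(m)
        c_prob = c if bit == 1 else 1.0 - c
        ratio *= m_prob / c_prob
    return ratio

# Placeholder environment and predictor, purely for illustration.
E = lambda m: m % 2     # bits alternate 1, 0, 1, 0, ...
p = lambda n: 0.5       # an M that always predicts 1/2

r23 = likelihood_ratio_vs_constant(E, p, 2/3, 1000)
r13 = likelihood_ratio_vs_constant(E, p, 1/3, 1000)
# Against alternating bits, constant 1/2 outperforms both 2/3 and 1/3,
# so neither ratio tends to 0 in this particular example.
```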
bad1 and bad2 compute log-badnesses of M relative to p1 and p2, on E[:prev]; the goal of M is to ensure that neither one goes to ∞. prev, this, and next are set in such a way that M is permitted access to E[:this] when computing p[this:next]. bad1() is now up to date through E[:this], not just E[:prev].
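As a sketch of this bookkeeping (assuming, as the arithmetic later in the thread suggests, constant advisors p1 = 1/3 and p2 = 2/3; this is an illustration, not the comment's original code):

```python
import math

P1, P2 = 1/3, 2/3  # the two constant advisors (assumed values)

def log_badness_increment(p, bit, advisor):
    """log of the advisor's probability of the observed bit, minus log of
    M's probability of it.  bad1/bad2 are running sums of this, so they
    grow when M predicts worse than the advisor does."""
    if bit == 1:
        return math.log(advisor) - math.log(p)
    return math.log(1 - advisor) - math.log(1 - p)

def badnesses(bits, preds):
    """(bad1, bad2) after seeing `bits` alongside M's predictions `preds`."""
    bad1 = bad2 = 0.0
    for bit, p in zip(bits, preds):
        bad1 += log_badness_increment(p, bit, P1)
        bad2 += log_badness_increment(p, bit, P2)
    return bad1, bad2
```

For example, predicting p = 1/3 throughout keeps bad1 at exactly 0 while bad2 moves by ±log 2 per bit.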
This is just for early iterations; in the limit, eps should be just enough for bad1 to go halfway to bound.

Now every iteration (after the first few) where mean(E[prev:this]) ≤ 2/5 will decrease bad2() by roughly at least

(this − prev) · (log(1−p1) − log(1−p2) + (2/5)(log(p1) − log(p2) − log(1−p1) + log(1−p2))) = (this − prev) · (1/5) · log(2) ≫ prev,

which is large enough to turn bad2() negative. Therefore, if M is bad for E, there can be only finitely many such iterations until the loop exits. However, every iteration where mean(E[prev:this]) ≥ 2/5 will cause bound − bad1() to grow exponentially (by a factor of 11/10 = 1 + (1/2)(−1 + (2/5)/p1)), so the loop will terminate.

Now we'll perform the same procedure for bad2(). For the same reasons as the previous loop, this loop either stops with bad2() < 0 or runs forever with bad2() bounded and bad1 repeatedly falling back below 0. Therefore, this algorithm either gets trapped in one of the inner while loops (and succeeds) or turns bad1() and bad2() negative, each an infinite number of times, and therefore succeeds.

Could you spell out the step
“every iteration where mean(E[prev:this]) ≥ 2/5 will cause bound − bad1() to grow exponentially (by a factor of 11/10 = 1 + (1/2)(−1 + (2/5)/p1))”

a little more? I don’t follow. (I think I follow the overall structure of the proof, and if I believed this step I would believe the proof.)

We have that eps is about (2/3)(1 − exp([bad1() − bound]/(next − this))), or at least half that, but I don’t see how to get a lower bound on the decrease of bad1() (as a fraction of bound − bad1()).
You are correct that you use the fact that 1 + eps is approximately e^(eps).

The concrete way this is used in this proof is replacing the ln(1 + 3·eps) you subtract from bad1 when the environment is a 1 with 3·eps = (bound − bad1)/(next − this), and replacing the ln(1 − 3·eps/2) you subtract from bad1 when the environment is a 0 with −3·eps/2 = −(bound − bad1)/(2(next − this)).

Therefore, you subtract from bad1 approximately at least

(next − this) · ((2/5)(bound − bad1)/(next − this) − (3/5)(bound − bad1)/(2(next − this))).

This comes out to (bound − bad1)/10.
I believe the inequality is the wrong direction to just use e^(eps) as a bound for 1+eps, but when next-this gets big, the approximation gets close enough.
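Both pieces of arithmetic in this exchange can be checked with exact rationals; writing x for (bound − bad1)/(next − this):

```python
from fractions import Fraction

p1 = Fraction(1, 3)  # the first advisor's constant prediction (assumed)

# Growth factor of bound - bad1() quoted above: 1 + (1/2)(-1 + (2/5)/p1).
factor = 1 + Fraction(1, 2) * (-1 + Fraction(2, 5) / p1)
assert factor == Fraction(11, 10)

# Net decrease of bad1 as a multiple of x: at least 2/5 of the steps see
# a 1 (subtracting x each), the rest see a 0 (adding x/2 each).
coefficient = Fraction(2, 5) - Fraction(3, 5) * Fraction(1, 2)
assert coefficient == Fraction(1, 10)
```

A decrease of bad1 by (bound − bad1)/10 is exactly what multiplies bound − bad1() by 11/10, so the two claims agree.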
In case anyone shared my confusion:

The while loop where we ensure that eps is small enough so that

bound > bad1() + (next - this) * log((1 - p1) / (1 - p1 - eps))

is technically necessary to ensure that bad1() doesn’t surpass bound, but it is immaterial in the limit. Solving

bound = bad1() + (next - this) * log((1 - p1) / (1 - p1 - eps))

for eps gives (2/3)(1 - e^{-[bound - bad1()] / [next - this]}); since eps is kept at least half of that,

eps >= (1/3) (1 - e^{-[bound - bad1()] / [next - this]})

which, using the log(1+x) ≈ x approximation, is about

(1/3) ([bound - bad1()] / [next - this]).
Then Scott’s comment gives the rest. I was worried about the fact that we seem to be taking the exponential of the error in our approximation, or something. But Scott points out that this is not an issue, because we can make [next - this] as big as we want, if necessary, without increasing bad1() at all, by guessing p1 for a very long time until [bound - bad1()] / [next - this] is close enough to zero that the error is too small to matter.
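The solved value of eps can also be checked numerically; with x = (bound − bad1())/(next − this) and p1 = 1/3 (assumed), the exact solution is (2/3)(1 − e^(−x)), which approaches (2/3)·x as x → 0, so "at least half of it" behaves like (1/3)·x:

```python
import math

p1 = 1/3  # assumed value of the first advisor's prediction

def solve_eps(x):
    """Exact solution of x = log((1 - p1) / (1 - p1 - eps)) for eps."""
    return (1 - p1) * (1 - math.exp(-x))

# Plugging the solution back into the constraint recovers x exactly.
x = 0.25
eps = solve_eps(x)
assert abs(math.log((1 - p1) / (1 - p1 - eps)) - x) < 1e-12

# For small x the solution is close to its linearization (2/3) * x.
x = 1e-4
assert abs(solve_eps(x) - (2/3) * x) / ((2/3) * x) < 1e-3
```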
Nice! I think I believe your claim, and I would like to chat with you to verify stuff and talk about future directions.
I have thought about algorithms very similar to this, and using such an algorithm got an M which is either good, or bad in the first sense and outputting probabilities converging to 2/3, or bad in the second sense and outputting probabilities converging to 1/3. I had thought that if epsilon was shrinking quickly enough that bad1 does not go to infinity, it would be shrinking so quickly that you could get locked in the while loop with bad2 increasing. I don’t think I actually checked this claim carefully, so I guess maybe I was wrong.
If this algorithm works as claimed, I wonder if you can extend it to three advisors (which may not be constant).
Ah, I think I can stymie M with 2 nonconstant advisors. Namely, let A1(n) = 1/2 − 1/(n+3) and A2(n) = 1/2 + 1/(n+3). We (setting up an adversarial E) precommit to setting E(n) = 0 if p(n) ≥ A2(n) and E(n) = 1 if p(n) ≤ A1(n); now we can assume that M always chooses p(n) ∈ [A1(n), A2(n)], since this is better for M.
Now define b′_i(j) = |A_i(j) + E(j) − 1| − |p(j) + E(j) − 1| and b_i(n) = ∑_{j<n} b′_i(j). Note that if we also define bad_i(n) = ∑_{j<n} (log|A_i(j) + E(j) − 1| − log|p(j) + E(j) − 1|), then (writing bad′_i(j) for the j-th term of bad_i) ∑_{j<n} |2·b′_i(j) − bad′_i(j)| ≤ ∑_{j<n} (2·A1(j) − 1 − log(2·A1(j))) = ∑_{j<n} O((1/2 − A1(j))²) is bounded; therefore if we can force b1(n) → ∞ or b2(n) → ∞ then we win.
Let’s reparametrize by writing δ(n) = A2(n) − A1(n) = 2/(n+3) and q(n) = (p(n) − A1(n))/δ(n), so that b′_i(j) = δ(j) · (|i − 2 + E(j)| − |q(j) − 1 + E(j)|).
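The reparametrized increment can be checked directly against the original definition; a numerical spot check (q ∈ [0, 1] corresponds to p(j) ∈ [A1(j), A2(j)]):

```python
def A(i, n):
    """A1(n) = 1/2 - 1/(n+3), A2(n) = 1/2 + 1/(n+3)."""
    return 0.5 + (1 / (n + 3) if i == 2 else -1 / (n + 3))

def b_prime_direct(i, j, p, E):
    """b'_i(j) as originally defined: |A_i(j)+E(j)-1| - |p(j)+E(j)-1|."""
    return abs(A(i, j) + E - 1) - abs(p + E - 1)

def b_prime_reparam(i, j, q, E):
    """The reparametrized form: delta(j) * (|i-2+E(j)| - |q(j)-1+E(j)|)."""
    delta = 2 / (j + 3)
    return delta * (abs(i - 2 + E) - abs(q - 1 + E))

# The two forms agree for every advisor, bit value, and q in [0, 1].
for j in (1, 5, 40):
    delta = 2 / (j + 3)
    for E in (0, 1):
        for q in (0.0, 0.25, 0.7, 1.0):
            p = A(1, j) + q * delta
            for i in (1, 2):
                assert abs(b_prime_direct(i, j, p, E) - b_prime_reparam(i, j, q, E)) < 1e-12
```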
Now, similarly to how M worked for constant advisors, let’s look at the problem in rounds: let s_0 = 0, and s_n = ⌊exp(s_{n−1} − 1)⌋ + 1 for n > 0. When determining E(s_{n−1}), …, E(s_n − 1), we can look at p(s_{n−1}), …, p(s_n − 1). Let t_n = ⌊b2(s_n) − 1/n⌋. Let’s set E(s_{n−1}), …, E(s_n − 1) to 1 if ∑_{j=s_{n−1}}^{s_n−1} δ(j)(1 − q(j)) ≥ 1; otherwise we’ll do something more complicated, but maintain the constraint that b2(s_n) ≥ b2(s_{n−1}) − 1/(n(n−1)) ≥ t_{n−1} + 1/n: this guarantees that t_n is nondecreasing and that liminf_{j→∞} b2(j) ≥ lim_{n→∞} t_n.
If t_n → ∞ then b2(n) → ∞ and we win. Otherwise, let t = lim_{n→∞} t_n, and consider n such that t_{n−1} = t.
We have ∑_{j=s_{n−1}}^{s_n−1} δ(j)(1 − q(j)) < 1. Let J ⊆ {s_{n−1}, …, s_n − 1} be a set of indices with q(j) ≥ q(j′) for all j ∈ J, j′ ∉ J, that is maximal under the constraint that ∑_{j∈J} δ(j)(1 − q(j)) ≤ 1/(n(n−1)); thus we will still have ∑_{j∈J} δ(j)(1 − q(j)) ≥ 1/(n(n−1)) − δ(s_{n−1}). We shall set E(j) = 0 for all j ∈ J.
By the definition of J:

∑_{j∈J} b′1(j) = ∑_{j∈J} δ(j) q(j) ≥ ∑_{j∈J} δ(j)(1 − q(j)) · (∑_{j=s_{n−1}}^{s_n−1} δ(j) q(j)) / (∑_{j=s_{n−1}}^{s_n−1} δ(j)(1 − q(j))) ≥ (1/(n(n−1)) − δ(s_{n−1})) · (∑_{j=s_{n−1}}^{s_n−1} δ(j) − 1) / 1 ≥ (1/(n(n−1)) − δ(s_{n−1})) · (2 log((s_n+3)/(s_{n−1}+3)) − 1) ≥ 2 if n ≫ 0.
For j′ ∉ J, we’ll proceed iteratively, greedily minimizing |∑_{j′=s_{n−1}}^{j} 1_{j′∉J} (b′1(j′), b′2(j′))|. Then:

min_{s_{n−1}≤j<s_n} ∑_{j′=s_{n−1}}^{j} 1_{j′∉J} b′1(j′) ≥ −√(∑_{j=s_{n−1}}^{s_n−1} δ(j)²) = −2 √(∑_{j=s_{n−1}+3}^{s_n+2} 1/j²) ≥ −2 √(∑_{j=s_{n−1}+3}^{s_n+2} (1/(j−1) − 1/j)) ≥ −2/√(s_{n−1}+2) ≥ −1 if n ≫ 0.
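The telescoping step in the middle of this chain (1/j² ≤ 1/(j−1) − 1/j, so the sum is at most 1/(start − 1)) is easy to sanity-check:

```python
def sum_inverse_squares(a, b):
    """Sum of 1/j^2 for j = a..b inclusive."""
    return sum(1.0 / (j * j) for j in range(a, b + 1))

a, b = 50, 100_000
telescoped = 1.0 / (a - 1) - 1.0 / b  # sum of (1/(j-1) - 1/j) for j = a..b
assert sum_inverse_squares(a, b) <= telescoped
# Hence -2 * sqrt(sum of delta(j)^2) >= -2 / sqrt(s_{n-1} + 2), as claimed.
```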
Keeping this constraint, we can flip (or not flip) all the E(j′)s for j′ ∉ J so that ∑_{j′=s_{n−1}}^{s_n−1} 1_{j′∉J} b′2(j′) > 0. Then we have b2(s_n) ≥ b2(s_{n−1}) − 1/(n(n−1)), b1(s_n) − b1(s_{n−1}) = ∑_{j=s_{n−1}}^{s_n−1} (1_{j∈J} + 1_{j∉J}) b′1(j) ≥ 2 − 1 = 1 if n ≫ 0, and for s_{n−1} ≤ j ≤ s_n, b1(j) ≥ b1(s_{n−1}) + ∑_{j′=s_{n−1}}^{j−1} 1_{j′∉J} b′1(j′) ≥ b1(s_{n−1}) − 1 if n ≫ 0.
Therefore, b1(j)→∞, so we win.
I don’t yet know whether I can extend it to two nonconstant advisors, but I do know I can extend it to a countably infinite number of constant-prediction advisors. Let (P_i)_{i=0,1,…} be an enumeration of their predictions that contains each one an infinite number of times. Then:
bad(i) is now up to date through E[:this], not just E[:prev]. This is just for early iterations of the inner loop; in the limit, eps should be just enough for bad(i) to go halfway to bound if we let p = abs(p1 + eps - flip).
Consider q = (log(1−p1) − log(1−p2)) / (log(1−p1) − log(1−p2) + log(p2) − log(p1)). This q is the probability between p1 and p2 such that if E[k] is chosen with probability |q − flip|, then that will have an equal impact on bad(i) and bad(j). Now consider some q′ between p1 and q. Every iteration where mean(|E[prev:this] − flip|) ≤ q′ will decrease bad(j) by a positive quantity that’s at least linear in this − prev, so (at least after the first few such iterations) this will exceed prev · (−log(max_{k: P_k has been reached} max(P_k, 1−P_k))) > bad(j), so it will turn bad(j) negative. If this happens for all j, then M cannot be bad for E. If it doesn’t, then let’s look at the first j where it doesn’t. After a finite number of iterations, every iteration must have mean(|E[prev:this] − flip|) > q′. However, this will cause bad(i) to decrease by a positive quantity that’s at least proportional to bound − bad(i); therefore, after a finite number of such iterations, we must reach bad(i) < 0. So if M is bad for E, then for each value of i we will eventually make bad(i) < 0 and then move on to the next value of i. This implies M is not bad for E.
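The equal-impact property of the mixing probability q defined in this comment can be verified numerically: under bit-probability q, both advisors have the same expected log score, which is exactly what makes the expected increments of the two badnesses equal (the flip indirection is omitted here for simplicity):

```python
import math

def equal_impact_q(p1, p2):
    """q = (log(1-p1) - log(1-p2)) / (log(1-p1) - log(1-p2) + log(p2) - log(p1))."""
    a = math.log(1 - p1) - math.log(1 - p2)
    b = math.log(p2) - math.log(p1)
    return a / (a + b)

def expected_log_score(q, p):
    """Expected log-probability that advisor p assigns to a Bernoulli(q) bit."""
    return q * math.log(p) + (1 - q) * math.log(1 - p)

for p1, p2 in ((1/3, 2/3), (0.2, 0.7), (0.45, 0.55)):
    q = equal_impact_q(p1, p2)
    assert p1 < q < p2  # q lies strictly between the advisors
    assert abs(expected_log_score(q, p1) - expected_log_score(q, p2)) < 1e-12
```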
Emboldened by this, we can also consider the problem of building an M that isn’t outperformed by any constant advisor. However, this cannot be done, according to the following handwavy argument:
Let q be some incompressible number, and let E(i) ~ Bern(q), i.i.d. When computing p(n), M can’t do appreciably better than Laplace’s law of succession, which will give it standard error √(q(1−q)/log(n)), and relative badness ∼ (q(1−q)/log(n)) · (1/q + 1/(1−q)) = 1/log(n) (relative to the q-advisor) on average. For i ≤ n and n ≫ 0, the greatest deviation of the badness from the ∑_{j=2}^{i} 1/log(j) ≥ (i−1)/log(i) trend is ≈ √(2n · loglog(n) · q(1−q)) (according to the law of the iterated logarithm), which isn’t enough to counteract the expected badness; therefore the badness will converge to infinity.
Let’s say E(i) = 1 iff p(i) < 1/2. Then both products go to zero at least as fast as (3/4)^n. Or am I missing something?