Your confusion arises because you are thinking about regret in an anytime setting. In an anytime setting there is a fixed policy π; we measure the expected reward of π over a time interval t and compare it to the optimal expected reward over the same interval. If π has probability p > 0 of walking into a trap, the regret has the linear lower bound Ω(pt).
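To spell out where the Ω(pt) bound comes from (a rough sketch; the constant c below is an assumption, standing for the minimum per-step loss suffered once the trap has been entered):

```latex
% Sketch: if being in the trap costs at least c > 0 reward per step
% relative to the optimal policy, and the trap is entered within the
% first O(1) steps with probability p, then over a horizon of t steps
\mathbb{E}[\mathrm{Regret}(t)]
  \;\ge\; p \cdot c \cdot \bigl(t - O(1)\bigr)
  \;=\; \Omega(p\, t).
```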
On the other hand, I am talking about policies π_t that explicitly depend on the parameter t (I call this a “metapolicy”). Both the advisor and the agent policies are like that. As t goes to ∞, the probability p(t) of walking into a trap goes to 0, so p(t)·t is a sublinear function of t.
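For a concrete illustration (the specific decay rate is only an example, not one of my results): if the metapolicy's trap probability falls off as p(t) = t^{−1/2}, then

```latex
% Example decay rate: p(t) = 1/sqrt(t) gives a sublinear trap contribution
p(t)\, t \;=\; t^{-1/2} \cdot t \;=\; \sqrt{t} \;=\; o(t).
```

So even though every finite t carries some trap risk, the trap's contribution to regret grows sublinearly in t.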
A second difference from the usual definition of regret is that I use an infinite sum of rewards with geometric time discount e^{−1/t} instead of a step-function time discount that cuts off at t. However, this second difference is entirely inessential, and all the theorems work about the same with a step-function time discount.
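Side by side, the two notions of value look like this (the per-step normalization is my choice of presentation, made so the two are directly comparable):

```latex
% Geometric discount with gamma = e^{-1/t} vs. a step-function cutoff at t
U_{\mathrm{geom}}(t) \;=\; (1-\gamma)\sum_{n=0}^{\infty} \gamma^{n} r_n,
\quad \gamma = e^{-1/t},
\qquad
U_{\mathrm{step}}(t) \;=\; \frac{1}{t}\sum_{n=0}^{t-1} r_n.
% Since 1/(1-\gamma) \approx t for large t, both weight roughly the first
% t rewards, which is why the theorems go through with either discount.
```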