Improved regret bound for DRL
There is a simple way to somewhat improve the regret bound for DRL that we derived before. Specifically, we have the following:
#Theorem 1
There is some $C \in (0,\infty)$ s.t. the following holds.
Consider $\mathcal{I}$ an interface, $\alpha,\epsilon \in (0,1)$ and $\mathcal{H} = \{\upsilon_k = (\mu_k, r_k) \in \Upsilon_{\mathcal{I}}\}_{k \in [N]}$ for some $N \in \mathbb{N}$. For each $k \in [N]$, let $\sigma_k$ be an $\epsilon$-sane policy for $\upsilon_k$, and let $M_k$ be the corresponding MDP. Denote
$$\tau := \frac{1}{N}\sum_{k=0}^{N-1} \max_{s \in \mathcal{S}_k} \sup_{\gamma \in [1-\alpha,1)} \left\lvert \frac{dV_{M_k}(s,\gamma)}{d\gamma} \right\rvert + 1$$
Assume that
i. For each $k \in [N]$, $\alpha \leq 1 - \gamma_{M_k}$.
ii. $\alpha \leq \frac{1}{N^2 (\ln N)^{4/3}} \left(\frac{1}{\epsilon} + |\mathcal{A}|\right)^{-1} \tau^{8/3}$
Then, there is an $\bar{\mathcal{I}}$-policy $\pi^*$ s.t.
$$\frac{1}{N}\sum_{k=0}^{N-1} \left(\mathrm{EU}^*_{\upsilon_k}(1-\alpha) - \mathrm{EU}^{\pi^*}_{\bar{\upsilon}_k[\sigma_k]}(1-\alpha)\right) \leq C \alpha^{1/4} N^{1/2} (\ln N)^{1/3} \left(\frac{1}{\epsilon} + |\mathcal{A}|\right)^{1/4} \tau^{1/3}$$
This gives us a better dependence on $N$ but a worse dependence on $\tau$ compared to what we had before. However, we can also get the best of both bounds with a single algorithm, since the only difference between the two algorithms is the choice of the parameters $\eta$ and $T$ (which can be set to whichever values give the better of the two bounds for the given parameters).
To show Theorem 1, we use the following proposition, which appears as Proposition 3 in Russo and Van Roy (see page 16):
#Proposition 1
Consider a probability space $(\Omega, P \in \Delta\Omega)$, $N \in \mathbb{N}$, a finite set $R \subseteq [0,1]$ and random variables $U: \Omega \to R$, $K: \Omega \to [N]$ and $J: \Omega \to [N]$. Assume that $K_*P = J_*P$ and $\mathrm{I}[K;J] = 0$ (that is, $K$ and $J$ are identically distributed and independent). Then
$$\mathrm{I}[K;J,U] \geq \frac{2}{N}\left(\mathrm{E}[U \mid J = K,\, K] - \mathrm{E}[U]\right)^2$$
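Since Proposition 1 is a statement about finite probability distributions, it can be spot-checked numerically. The sketch below is not part of the original argument: it builds a random joint distribution satisfying the assumptions and compares the two sides, reading the conditional term on the right-hand side as the average over $k$ of $\mathrm{E}[U \mid J=k, K=k]$, and it assumes numpy is available.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4                               # size of the index set [N]
R = np.linspace(0.0, 1.0, 5)        # finite subset of [0,1], the range of U

# Common marginal of K and J; K and J are taken independent, as the proposition assumes.
p = rng.dirichlet(np.ones(N))

# Conditional law of U given (K, J): one distribution over R per pair (k, j).
u_given_kj = rng.dirichlet(np.ones(len(R)), size=(N, N))

# Joint distribution P(K=k, J=j, U=R[u]) = p[k] * p[j] * u_given_kj[k, j, u].
joint = p[:, None, None] * p[None, :, None] * u_given_kj

# Left-hand side: mutual information I[K; (J, U)] in nats.
p_k = joint.sum(axis=(1, 2))        # marginal of K (equals p)
p_ju = joint.sum(axis=0)            # marginal of (J, U)
lhs = float(np.sum(joint * np.log(joint / (p_k[:, None, None] * p_ju[None, :, :]))))

# Right-hand side: (2/N) * (average over k of E[U | J=k, K=k] minus E[U])^2.
e_u = float(np.sum(joint * R[None, None, :]))                    # E[U]
e_u_diag = np.array([u_given_kj[k, k] @ R for k in range(N)])    # E[U | J=k, K=k]
gap = float(p @ e_u_diag) - e_u
rhs = 2.0 / N * gap ** 2

print(f"I[K;J,U] = {lhs:.6f} >= {rhs:.6f}")
assert lhs >= rhs
```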
Theorem 1 is now proved exactly as before, with Proposition 1 replacing what was previously called Proposition B.3, and with $\eta$ and $T$ set to
$$\eta = \alpha^{1/4} N^{-1/2} (\ln N)^{1/3} \left(\frac{1}{\epsilon} + |\mathcal{A}|\right)^{1/4} \tau^{1/3}$$
$$T = \alpha^{-1/4} N^{-1/2} (\ln N)^{-1/3} \left(\frac{1}{\epsilon} + |\mathcal{A}|\right)^{-1/4} \tau^{2/3}$$
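As a quick sanity check on these choices (an observation, not part of the original proof): multiplying the two expressions gives $\eta T = \tau/N$, and $T \geq 1$ holds precisely when assumption ii does. The snippet below evaluates $\eta$, $T$ and the bound of Theorem 1 for some arbitrary illustrative parameter values (not taken from the post).

```python
from math import log

def theorem_1_parameters(alpha, N, eps, num_actions, tau, C=1.0):
    """Evaluate eta, T and the regret bound of Theorem 1 (up to the constant C)."""
    common = log(N) ** (1 / 3) * (1 / eps + num_actions) ** (1 / 4)
    eta = alpha ** (1 / 4) * N ** (-1 / 2) * common * tau ** (1 / 3)
    T = alpha ** (-1 / 4) * N ** (-1 / 2) / common * tau ** (2 / 3)
    bound = C * alpha ** (1 / 4) * N ** (1 / 2) * common * tau ** (1 / 3)
    return eta, T, bound

# Arbitrary illustrative values; alpha is taken well below the threshold of assumption ii.
N, eps, num_actions, tau = 10, 0.1, 5, 2.0
alpha_max = N ** -2 * log(N) ** (-4 / 3) * (1 / eps + num_actions) ** -1 * tau ** (8 / 3)
alpha = 1e-3 * alpha_max

eta, T, bound = theorem_1_parameters(alpha, N, eps, num_actions, tau)
assert T >= 1.0                              # equivalent to assumption ii
assert abs(eta * T - tau / N) < 1e-9         # the two choices multiply to tau/N
print(f"eta = {eta:.4g}, T = {T:.4g}, regret bound <= {bound:.4g} * C")
```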