Entropic Regret I: Deterministic MDPs
This is the first in a series of essays that aim to derive a regret bound for DRL that depends on finer attributes of the prior than just the number of hypotheses. Specifically, we consider the entropy of the prior and a certain learning-theoretic dimension parameter. As a “by-product”, we derive a new regret bound for ordinary RL without resets and without traps. In this chapter, we open the series by demonstrating the latter, under the significant simplifying assumption that the MDPs are deterministic.
Background
The regret bound we previously derived for DRL grows as a power law with the number of hypotheses. In contrast, the RL literature is usually concerned with considering all transition kernels on a fixed state space satisfying some simple assumption (e.g. a bound on the diameter or bias span). In particular, the number of hypotheses is uncountable. While the former seems too restrictive (although it’s relatively simple to generalize it to countable hypothesis classes), the latter seems too coarse. Indeed, we expect a universally intelligent agent to detect patterns in the data, i.e. follow some kind of simplicity prior rather than a uniform distribution over transition kernels.
The underlying technique of our proof was lower bounding the information gain in a single episode of posterior sampling by the expected regret incurred during this episode. Although we have not previously stated it in this form, the resulting regret bound depends on the entropy of the prior (we considered a uniform prior instead). This idea (unbeknownst to us) appeared earlier in Russo and Van Roy. Moreover, Russo and Van Roy later used it to formulate a generalization of posterior sampling they call “information-directed sampling” that can produce far better regret bounds in certain scenarios. However, to the best of our knowledge, this technique was not previously used to analyze reinforcement learning (as opposed to bandits). Therefore, it seems natural to derive such an “entropic” regret bound for ordinary RL, before extending it to DRL.
Now, Osband and Van Roy derived a regret bound for priors supported on some space of transition kernels as a function of its Kolmogorov dimension and “eluder” dimension (the latter introduced previously by Russo and Van Roy). They also consider a continuous state space. This is a finer approach than considering nearly arbitrary transition kernels on a fixed state space, but it still doesn’t distinguish between different priors with the same support. Our new results involve a parameter similar to eluder dimension, but instead of Kolmogorov dimension we use entropy (in the following chapters we will see that Kolmogorov dimension is, in some sense, an upper bound on entropy). As opposed to Osband and Van Roy, we currently limit ourselves to finite state spaces, but on the other hand we consider no resets (at the price of a “no traps” assumption).
In this chapter we derive the entropic regret bound for deterministic MDPs. (In the following we will call them deterministic decision processes (DDPs) since they have little to do with Markov.) This latter restriction significantly simplifies the analysis. In the following chapters, we will extend it to stochastic MDPs; however, the resulting regret bound will be somewhat weaker.
Results
We start by introducing a new learning-theoretic concept of dimension. It is similar to eluder dimension, but is adapted to the discrete deterministic setting and also somewhat stronger (i.e. smaller: more environment classes are low dimensional w.r.t. this concept than w.r.t. eluder dimension).
Definition 1
Consider sets A, B and F⊆{A→B} non-empty. Given C⊆A×B and a∗∈A, we say that a∗ is F-dependent of C when for any f,g∈F s.t. for any (a,b)∈C it holds f(a)=g(a)=b, we have f(a∗)=g(a∗). Otherwise, we say that a∗ is F-independent of C.
The prediction dimension of F (denoted dimpF) is the supremum of the set of n∈N for which there is a sequence {(ak∈A,bk∈B)}k∈[n] s.t. for every k∈[n], ak is F-independent of {(aj,bj)}j∈[k].
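To make Definition 1 concrete, the following is a small brute-force computation of dimpF for finite A, B and F (a sketch with helper names of our choosing, not part of the formal development); it can also be used to spot-check Propositions 1 and 2 below on toy examples.

```python
from itertools import product

def is_independent(F, C, a_star):
    # a_star is F-independent of C iff some f, g in F agree with C on all of
    # its pairs but disagree at a_star (Definition 1).
    consistent = [f for f in F if all(f[a] == b for a, b in C)]
    return any(f[a_star] != g[a_star] for f in consistent for g in consistent)

def prediction_dimension(F, A, B):
    # Brute-force dim_p F: longest sequence (a_k, b_k) such that each a_k is
    # F-independent of the preceding pairs.  Only sensible for tiny A, B, F.
    best = 0
    def extend(seq):
        nonlocal best
        best = max(best, len(seq))
        for a, b in product(A, B):
            if is_independent(F, seq, a):
                extend(seq + [(a, b)])
    extend([])
    return best

# Spot check of Propositions 1 and 2: dim_p F < |F| and dim_p F <= |A|.
A, B = [0, 1, 2], [0, 1]
F = [dict(zip(A, values)) for values in product(B, repeat=len(A))]  # all of {A -> B}
print(prediction_dimension(F, A, B), len(F), len(A))  # prints: 3 8 3
```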
Fix non-empty finite sets S (the set of states) and A (the set of actions). Denote X:=S×[0,1], where the second factor is regarded as the space of reward values. Given e:S×A→X, Te:S×A→S and Re:S×A→[0,1] s.t. e(s,a)=(Te(s,a),Re(s,a)), we regard Te as the (deterministic) transition kernel and Re as the reward function associated with “DDP hypothesis” e. This allows us to speak of the dimension of a DDP hypothesis class (i.e. some H⊆{S×A→X}). We now give some examples for dimensions of particular function/hypothesis classes.
Proposition 1
Given any A, B and F⊆{A→B}, we have dimpF<|F|.
Proposition 2
Given any A, B and F⊆{A→B}, we have dimpF≤|A|. In particular, given H as above, dimpH≤|S|⋅|A|.
Proposition 3
We now consider deterministic Markov decision processes that are cellular automata. Consider finite sets X (the set of cells), M (the set of neighbor types) and a mapping ν:X×M→X (which cell is the neighbor of which cell). For example, M might be a subset of a group acting on X. More specifically, X can be (Z/nZ)^d acting on itself, corresponding to a d-dimensional toroidal cellular automaton of size n.
Consider another set C (the set of cell states) and suppose that S={X→C}. Given any s∈S and x∈X, define sx:M→C by sx(m):=s(ν(x,m)). Given any T:{M→C}×A→C, define Tglob:S×A→S by Tglob(s,a)(x):=T(sx,a). Given any R:{M→C}→[0,1], define Rglob:S→[0,1] by Rglob(s):=(1/|X|)∑x∈X R(sx). Define H by
$$H := \left\{(T^{\mathrm{glob}}, R^{\mathrm{glob}}) \,\middle|\, T : \{M \to C\} \times A \to C,\ R : \{M \to C\} \to [0,1]\right\}$$
That is, H is the set of transition kernels and reward functions that are local in the sense defined by ν. Then, dimpH≤|C|^{|M|}(|A|+1).
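To illustrate the hypothesis class of Proposition 3, here is a sketch (with a placeholder local rule and local reward of our own choosing) of Tglob and Rglob for a 1-dimensional toroidal automaton: X=Z/nZ, M={−1,0,+1} and ν(x,m)=x+m mod n. For this class Proposition 3 gives dimpH ≤ |C|^{|M|}(|A|+1) = 2^3⋅3 = 24, regardless of the number of cells n.

```python
n = 8                    # number of cells, X = Z/nZ
M = (-1, 0, +1)          # neighbor types
ACTIONS = (0, 1)         # the action set A; cell states C = {0, 1}

def neighborhood(s, x):
    # s_x : M -> C, encoded as a tuple ordered like M.
    return tuple(s[(x + m) % n] for m in M)

def T(sigma, a):
    # Example local transition rule {M -> C} x A -> C: parity of neighbors plus action.
    return (sum(sigma) + a) % 2

def R(sigma):
    # Example local reward {M -> C} -> [0, 1]: reward 1 iff the center cell is 1.
    return float(sigma[1])

def T_glob(s, a):
    # T_glob(s, a)(x) := T(s_x, a)
    return tuple(T(neighborhood(s, x), a) for x in range(n))

def R_glob(s):
    # R_glob(s) := (1/|X|) * sum_x R(s_x)
    return sum(R(neighborhood(s, x)) for x in range(n)) / n

s = (1, 0, 0, 1, 0, 0, 0, 0)
print(T_glob(s, 1), R_glob(s))
```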
In Proposition 3, it might seem like, although the rules of the automaton are local, the influence of the agent is necessarily global, because the dependence on the action appears in all cells. However, this is not really a restriction: the state of the cells can encode a particular location for the agent and the rules might be such that the agent’s influence is local around this location. More unrealistic is the full observability. Dealing with partially observable cellular automata is outside the scope of this essay.
The central lemma in the proof of the regret bound for RL is a regret bound in its own right, in the setting of (deterministic) contextual bandits. Since this lemma might be of independent interest, we state it already here.
Let S (contexts), A (arms) and O (outcomes) be non-empty sets. Fix a function R:O→[0,1] (the reward function). For any c∈Sω (a fixed sequence of contexts), e:S×A→O (outcome rule) and π:(S×O)∗×Sk→A (policy), we define ecπ∈ΔOω to be the resulting distribution over outcome histories. Given γ∈[0,1), we define Uγ:Oω→[0,1] (the utility function) by
$$U_\gamma(o) := (1-\gamma)\sum_{n=0}^{\infty} \gamma^n R(o_n)$$
Lemma 1
Consider a countable non-empty set of hypotheses H⊆{S×A→O} and some ζ∈ΔH (the prior). For each s∈S, define Hs⊆{A→O} by
$$H_s := \{e : A \to O \mid e(a) = e'(s,a),\ e' \in H\}$$
Let D:=maxs∈SdimpHs and suppose that A is countable [this assumption is to simplify the proof and is not really necessary]. Then, there exists π†:(S×O)∗×Sk→A s.t. for any c∈Sω and γ∈(0,1)
Note that the expression on the left hand side is the Bayesian regret. On the right hand side, H(ζ) stands for the Shannon entropy of ζ. In particular we have
$$\mathrm{H}(\zeta) \le \ln |H| \le |S| \cdot |A| \ln |O|$$
Also, it’s not hard to see that |H|≤|O|^{dimpH}≤|O|^{D|S|} and therefore
$$D \cdot \mathrm{H}(\zeta) \le (\dim_p H)^2 \ln |O|$$
$$D \cdot \mathrm{H}(\zeta) \le D^2 |S| \ln |O|$$
Finally, the policy π† we actually consider in the proof is Thompson sampling.
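Since the policy in Lemma 1 is Thompson sampling, the following minimal sketch (names and simplifications ours: finite hypothesis class, outcomes observed directly, true environment assumed to lie in the support of the prior) may help fix intuitions; the formal construction in the proof below works with countable H and with policies given as Markov kernels.

```python
import random

def thompson_sampling(true_env, prior, arms, contexts, reward, rng=random.Random(0)):
    # true_env and the hypotheses are dicts (context, arm) -> outcome;
    # prior is a list of (hypothesis, weight) pairs; reward plays the role of R.
    history, total = [], 0.0
    for s in contexts:
        # Posterior = prior restricted to hypotheses consistent with the history.
        consistent = [(e, w) for e, w in prior
                      if all(e[(s_m, a_m)] == o_m for s_m, a_m, o_m in history)]
        hypotheses, weights = zip(*consistent)
        e = rng.choices(hypotheses, weights=weights)[0]     # sample from the posterior
        a = max(arms, key=lambda arm: reward(e[(s, arm)]))  # best arm for the sample
        o = true_env[(s, a)]                                # deterministic outcome
        history.append((s, a, o))
        total += reward(o)
    return total, history

# Tiny example: two contexts, two arms, outcomes equal to their rewards.
H0 = {("s0", 0): 0.0, ("s0", 1): 1.0, ("s1", 0): 1.0, ("s1", 1): 0.0}
H1 = {("s0", 0): 1.0, ("s0", 1): 0.0, ("s1", 0): 0.0, ("s1", 1): 1.0}
prior = [(H0, 0.5), (H1, 0.5)]
print(thompson_sampling(H0, prior, (0, 1), ["s0", "s1"] * 5, lambda o: o)[0])
```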
Now we proceed to study reinforcement learning. First, we state a regret bound for RL with resets. Now S stands for the set of states and A for the set of actions. We fix a sequence of initial states c∈Sω. For any e:S×A→X (environment), π:X∗×Xk→A (policy), and T∈N+ we define ec[T]π∈ΔXω to be the resulting distribution over histories, assuming that the state is reset to cn and the reward to 0 every time period of length T+1. In particular, we have
$$\Pr_{x \sim e_c[T]\pi}\left[\forall n \in \mathbb{N} : x_{n(T+1)} = (c_n, 0)\right] = 1$$
Given γ∈[0,1) we define Uγ:Xω→[0,1] by
$$U_\gamma(sr) := (1-\gamma)\sum_{n=0}^{\infty} \gamma^n r_n$$
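For concreteness, here is a sketch of the resetting interaction ec[T]π described above, under the simplification that the policy is deterministic and is fed the past history together with the current state (the names are ours). As required, the history element at every time of the form n(T+1) is (c_n, 0).

```python
def run_with_resets(e, policy, c, T, num_steps):
    # e : (state, action) -> (next_state, reward);  c : list of initial states
    # (must contain at least num_steps // (T + 1) + 1 elements).
    history, state = [], None
    for t in range(num_steps):
        if t % (T + 1) == 0:                    # reset step: state c_n, reward 0
            state, reward = c[t // (T + 1)], 0.0
        else:
            action = policy(history, state)
            state, reward = e(state, action)
        history.append((state, reward))
    return history
```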
Theorem 1
Consider a countable non-empty set of hypotheses H⊆{S×A→X} and some ζ∈ΔH. Let D:=dimpH. Then, for any T∈N+ and γ∈[0,1), there exists π†γ:X∗×Xk→A s.t. for any c∈Sω
Note that ec[T]π is actually a probability measure concentrated on a single history, since e and π are deterministic: we didn’t make it explicit only to avoid introducing new notation.
Finally, we give the regret bound without resets. For any e:S×A→X and π:X∗×Xk→A, we define eπ∈ΔXω to be the resulting distribution over histories, given initial state s0 and no resets.
Theorem 2
Consider a countable non-empty set of hypotheses H⊆{S×A→X} and some ζ∈ΔH. Let D:=dimpH. Assume that for any e∈H and s∈S, A0e(s)=A [A0 was defined here in “Definition 1”; so was the value function V(s,x) used below] (i.e. there are no traps). For any γ∈[0,1) we define τ(γ) by
$$\tau(\gamma) := \mathbb{E}_{e \sim \zeta}\left[\max_{s \in S}\, \sup_{x \in (\gamma,1)} \left|\frac{dV_e(s,x)}{dx}\right|\right]$$
Then, for any γ∈[0,1) s.t. 1−γ≪1 there exists π†γ:X∗×Xk→A s.t.
Note that τ(γ) decreases with γ, so this factor doesn’t make the qualitative dependence on γ any worse.
Both Theorem 1 and Theorem 2 have anytime variants in which the policy doesn’t depend on γ at the price of a slightly (within a constant factor) worse regret bound, but for the sake of brevity we don’t state them (our ultimate aim is DRL which is not anytime anyway). In Theorem 2 we didn’t specify the constant, so it is actually true verbatim without the γ dependence in π†γ (but we still leave the dependence to simplify the proof a little). It is also possible to spell out the assumption on γ in Theorem 2.
Proofs
Definition A.1
Consider sets A, B and F⊆{A→B} non-empty. A sequence {(ak∈A,bk∈B)}k∈[n] is said to be F-independent when for every k∈[n], ak is F-independent of {(aj,bj)}j∈[k].
Definition A.2
Consider sets A, B. Given C⊆A×B and a∗∈A, suppose f,g:A→B are s.t.:
For any (a,b)∈C, f(a)=g(a)=b
f(a∗)≠g(a∗)
Then, f,g are said to shatter (C,a∗).
Given sequences {(ak∈A,bk∈B)}k∈[n] and {(fk,gk:A→B)}k∈[n], (f,g) is said to shatter (a,b) when for any k∈[n], (fk,gk) shatters ({(aj∈A,bj∈B)}j∈[k],ak).
Proof of Proposition 1
Consider {(ak∈A,bk∈B)}k∈[n] an F-independent sequence. We will now construct G⊆F s.t. |G|=n+1, which is sufficient.
By the definition of F-independence, there is a sequence {(fk,gk∈F)}k∈[n] that shatters (a,b). We have fk(ak)≠gk(ak) and therefore either fk(ak)≠bk or gk(ak)≠bk. Without loss of generality, assume that for each k∈[n], fk(ak)≠bk. It follows that for any k∈[n] and j∈[k], fj(aj)≠bj=fk(aj) and therefore fj≠fk.
If n=0 then there is nothing to prove since F is non-empty, hence we can assume n>0. For any k∈[n−1], fk(ak)≠bk=gn−1(ak) and therefore fk≠gn−1. Also, fn−1(an−1)≠gn−1(an−1) and therefore fn−1≠gn−1. We now take G={f0,f1…fn−1,gn−1}, completing the proof. ■
Proof of Proposition 2
Consider {(ak∈A,bk∈B)}k∈[n] an F-independent sequence and {(fk,gk∈F)}k∈[n] that shatters it. For any k∈[n] and j∈[k], we have fk(aj)=gk(aj) but fk(ak)≠gk(ak), implying that aj≠ak. It follows that |A|≥n. ■
Proof of Proposition 3
Consider {(sk∈S,ak∈A,tk∈S,rk∈[0,1])}k∈[n] an H-independent sequence and {(Tglobk,Rglobk,~Tglobk,~Rglobk)}k∈[n] that shatters it. Define A,B⊆[n] by
$$A := \{k \in [n] \mid T_k^{\mathrm{glob}}(s_k, a_k) \ne \tilde{T}_k^{\mathrm{glob}}(s_k, a_k)\}$$
$$B := \{k \in [n] \mid R_k^{\mathrm{glob}}(s_k) \ne \tilde{R}_k^{\mathrm{glob}}(s_k)\}$$
Since (Tglob,Rglob,~Tglob,~Rglob) shatters (s,a,t,r), we have A∪B=[n].
Consider any k∈A. Obviously, there is xk∈X s.t.
$$T_k^{\mathrm{glob}}(s_k, a_k)(x_k) \ne \tilde{T}_k^{\mathrm{glob}}(s_k, a_k)(x_k)$$
Denote σk:=(sk)xk∈C^M. We have Tk(σk,ak)≠~Tk(σk,ak). On the other hand, for any j∈A∩[k], the shattering implies Tglobk(sj,aj)=~Tglobk(sj,aj) and in particular Tk(σj,aj)=~Tk(σj,aj). Therefore, (σk,ak)≠(σj,aj). We conclude that |A|≤|C|^{|M|}⋅|A|.
Now consider any k∈B. Define fk∈R^{M→C} by
$$(f_k)_\sigma := \frac{\left|\{x \in X \mid (s_k)_x = \sigma\}\right|}{|X|}$$
We have
$$f_k \cdot R_k = R_k^{\mathrm{glob}}(s_k) \ne \tilde{R}_k^{\mathrm{glob}}(s_k) = f_k \cdot \tilde{R}_k$$
$$f_k \cdot (R_k - \tilde{R}_k) \ne 0$$
(These are dot products in R^{M→C}.) On the other hand, for any j∈B∩[k], the shattering implies fj⋅Rk=fj⋅~Rk and therefore fj⋅(Rk−~Rk)=0. Therefore, fk is not in the linear span of {fj}j∈B∩[k] and hence the set {fk}k∈B is linearly independent. We conclude that |B|≤dim R^{M→C}=|C|^{|M|}.
Putting everything together, we get n≤|A|+|B|≤|C|^{|M|}(|A|+1). ■
Proposition A.1
Consider a countable set A, a set B, a non-empty countable set F⊆{A→B}, some f∗:A→B, some ζ∈ΔF and some ξ∈ΔA. Denote
$$p := \Pr_{(a,f) \sim \xi \times \zeta}[f(a) \ne f^*(a)]$$
Then, for any q∈(0,1), we can choose A∘⊆A and F∘⊆F s.t. ξ(A∘)≥1−q, ζ(F∘)≥1−(p/q)⋅dimpF and for any f,g∈F∘ and a∈A∘, f(a)=g(a).
Proof of Proposition A.1
We define A∘ by
$$A^\circ := \left\{a \in A \,\middle|\, \Pr_{f \sim \zeta}[f(a) \ne f^*(a)] \le \frac{p}{q}\right\}$$
The fact that ξ(A∘)≥1−q follows from the definition of p.
Denote n:=|A∘|∈N∪{ω} and enumerate A∘ as A∘={ak}k∈[n]. For each k∈[n], define Fk⊆F recursively by
$$F_0 := F$$
$$F_{k+1} := \begin{cases} F_k & \text{if } \forall f,g \in F_k : f(a_k) = g(a_k) \\ \{f \in F_k \mid f(a_k) = f^*(a_k)\} & \text{otherwise} \end{cases}$$
Set F∘:=⋂k∈[n]Fk. Define I⊆[n] by
$$I := \{k \in [n] \mid F_{k+1} \ne F_k\}$$
Denote m:=|I|. For any i∈[m], we denote by ki the i-th number in I in increasing order. Denote a∗i:=aki and b∗i:=f∗(a∗i). By the definition of Fk, for each i∈[m] we can choose fi,gi∈Fki s.t. fi(a∗i)≠gi(a∗i). Moreover, it also follows from the definition that for every i∈[m] and j∈[i], fi(a∗j)=gi(a∗j)=b∗j (using only the fact that fi,gi∈Fki and ki>kj). Therefore, (f,g) shatters (a∗,b∗) and hence m≤dimpF.
By the definition of A∘, for any k∈I, ζ(Fk+1)≥ζ(Fk)−p/q. On the other hand, for any k∈[n]∖I, Fk+1=Fk and in particular ζ(Fk+1)=ζ(Fk). We conclude that
$$\zeta(F^\circ) \ge \zeta(F_0) - \frac{p}{q}\,m \ge 1 - \frac{p}{q}\dim_p F \;\blacksquare$$
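The construction in this proof is effectively an algorithm, so here is a small executable rendition for finite A and F (hypotheses encoded as tuples indexed like the list A; the function name core_sets is ours). It returns A∘ and F∘ (as index sets) exactly as built above; the mass bounds of Proposition A.1 can then be checked directly.

```python
def core_sets(A, F, zeta, xi, f_star, q):
    # F: list of tuples (hypotheses A -> B); zeta, xi: probability vectors over F, A.
    p = sum(xi[i] * zeta[j]
            for i in range(len(A)) for j, f in enumerate(F) if f[i] != f_star[i])
    # A_circ: points where the zeta-mass of disagreement with f_star is at most p/q.
    A_circ = [i for i in range(len(A))
              if sum(zeta[j] for j, f in enumerate(F) if f[i] != f_star[i]) <= p / q]
    # Recursive filtering F_0, F_1, ...: pin f(a_k) = f_star(a_k) whenever F_k
    # still disagrees at a_k; F_circ is the final (= intersected) class.
    F_circ = list(range(len(F)))
    for i in A_circ:
        if len({F[j][i] for j in F_circ}) > 1:
            F_circ = [j for j in F_circ if F[j][i] == f_star[i]]
    return A_circ, F_circ

A = ["a0", "a1"]
F = [(0, 0), (0, 1), (1, 1)]
print(core_sets(A, F, zeta=[0.5, 0.3, 0.2], xi=[0.5, 0.5], f_star=(0, 0), q=0.5))
```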
Proposition A.2
Consider a countable set A, a set B, a non-empty countable set F⊆{A→B}, some ζ∈ΔF and some ξ∈ΔA. Consider f∗:A→B s.t.
$$f^*(a) \in \underset{b \in B}{\operatorname{arg\,max}}\, \Pr_{f \sim \zeta}[f(a) = b]$$
Then,
$$\Pr_{(a,f) \sim \xi \times \zeta}[f(a) \ne f^*(a)] \le \frac{1}{\ln 2}\, \mathrm{I}_{(a,f) \sim \xi \times \zeta}[f; a, f(a)]$$
[I[f;a,f(a)] stands for the mutual information between f and the joint random variables a,f(a).]
Proof of Proposition A.2
By the chain rule for mutual information
The first term on the right hand side obviously vanishes, and the second term can be expressed in terms of KL-divergence. For any a∈A, define eva:F→B by eva(f):=f(a). We get
For any a∈A and f∈F, ζ(ev_a^{-1}(f(a)))≤ζ(ev_a^{-1}(f∗(a))) by definition of f∗. If f(a)≠f∗(a), then
It follows that, in this case, ζ(ev_a^{-1}(f(a)))≤1/2. We conclude
Proposition A.3
Consider a countable set A, a set O, a non-empty countable set H⊆{A→O}, some ζ∈ΔH and some R:O→[0,1]. Let Π:H→A be s.t.
Then,
Proof of Proposition A.3
Denote ξ:=Π∗ζ, D:=dimpH and Γ:=I(a,e)∼ξ×ζ[e;a,e(a)]. By Proposition A.2, there is e∗:A→O s.t.
$$\Pr_{(a,e) \sim \xi \times \zeta}[e(a) \ne e^*(a)] \le \frac{\Gamma}{\ln 2}$$
By Proposition A.1 (setting q:=√(ΓD/ln2)), there are A∘⊆A and H∘⊆H s.t. ξ(A∘)≥1−√(ΓD/ln2), ζ(H∘)≥1−√(ΓD/ln2) and for any e1,e2∈H∘ and a∈A∘, e1(a)=e2(a). Define H∗:=H∘∩Π−1(A∘). We have ζ(H∗)≥1−2√(ΓD/ln2).
Denote L:=E_{e∼ζ}[max_{a∈A} R(e(a))−E_{a∼ξ}[R(e(a))]]. For brevity, we will also use the notation E∗_e[X]:=E_{e∼ζ}[X; e∈H∗]. We get, using the bound on ζ(H∗)
$$L \le \mathbb{E}^*_e\left[\max_{a \in A} R(e(a)) - \mathbb{E}_{a \sim \xi}[R(e(a))]\right] + 2\sqrt{\frac{\Gamma D}{\ln 2}}$$
$$L \le \mathbb{E}^*_e\left[\mathbb{E}^*_{e'}\left[R(e(\Pi(e))) - R(e(\Pi(e')))\right]\right] + 4\sqrt{\frac{\Gamma D}{\ln 2}}$$
Using the properties of H∘ and A∘ we get
$$L \le \mathbb{E}^*_e\left[\mathbb{E}^*_{e'}\left[R(e'(\Pi(e))) - R(e'(\Pi(e')))\right]\right] + 4\sqrt{\frac{\Gamma D}{\ln 2}}$$
The expression inside the expected values in the first term on the right hand side is non-positive, by the property of Π. We conclude
$$L \le 4\sqrt{\frac{\Gamma D}{\ln 2}} \;\blacksquare$$
Proof of Lemma 1
For each s∈S we choose some Πs:H→A s.t.
$$\Pi_s(e) \in \underset{a \in A}{\operatorname{arg\,max}}\, R(e(s,a))$$
We now take π† to be Thompson sampling. More formally, we construct a probability space (Ω,P) with random variables K:Ω→H (the true environment) and {Jn:Ω→{Sn→H}}n∈N (the hypothesis sampled on round n). We define {An:Ω→{Sn+1→A}}n∈N (the action taken on round n) and {Θn:Ω→{Sn+1→O}}n∈N (the observation made on round n) by
$$A_n(s) := \Pi_{s_n}(J_n(s_{:n}))$$
$$\Theta_n(s) := K(s_n, A_n(s))$$
We also define {Hn:Ω→{Sn→H}}n∈N by
$$H_n(s) := \{e \in H \mid \forall m \in [n] : e(s_m, A_m(s_{:m})) = \Theta_m(s_{:m})\}$$
Hn is thus the set of hypotheses consistent with the previous outcomes K(sm,Πsm(Jm)). The random variables K,J have to satisfy K∗P=ζ (the distribution of the true environment is the prior) and
$$\Pr\left[J_n(s) = e \,\middle|\, K, \{J_m(s_{:m})\}_{m \in [n]}\right] = \Pr_{\zeta}[e \mid H_n(s)]$$
That is, the distribution of Jn conditional on K and Jm for m∈[n] is given by the prior ζ updated by the previous outcomes. π† is then defined s.t. for any so∈(S×O)n, t∈S and a∈A:
$$\pi^\dagger(a \mid so, t) := \Pr\left[A_n(st) = a \,\middle|\, \forall m \in [n] : o_m = \Theta_m(s_{:m})\right]$$
Now we need to prove the regret bound. We denote the Bayesian regret by R:
The construction of π† implies that
Define {Zn:Ω→{Sn→ΔH}}n∈N (the belief state of the agent on round n) and {Ξn:Ω→{Sn+1→ΔA}}n∈N (the distribution over actions on round n) by
Using expectation conditional on Zn we can rewrite the equation for R(γ) as
It follows that
We now apply Proposition A.3 and get
Given any x∈X, we will use the notation x=(st x∈S, rw x∈[0,1]).
Proposition A.4
Fix T∈N+ and consider a non-empty set of DDP hypotheses H⊆{S×A→X}. For each e:S×A→X we define e♯:S×A^T→X^T recursively by
(That is, e♯(s,π) is the history resulting from DDP e interacting with action sequence π starting from initial state s.) Define
Then, for any s∈S, dimpH♯s≤dimpH.
Here, the subscript s has the same meaning as in Lemma 1.
Proof of Proposition A.4
Fix s∈S. Let {(πk∈A♯,hk∈X^T)}k∈[n] be an H♯s-independent sequence. We will now construct a sequence
s.t. (e,~e) shatters (s∗,a∗,x∗). The latter implies that (s∗,a∗,x∗) is an H-independent sequence, establishing the claim.
For brevity, we will use the notation fπ:=f♯(s,π) (here, f:S×A→X and π:S×N→A). For each k∈[n] define mk∈[T] by
Note that the set above is indeed non-empty because (π,h) is H♯s-independent.
Given g∈X^T, we will use the notational convention st g−1:=s.
Choose ek,~ek s.t. (ekπk):mk+1≠(~ekπk):mk+1 and ∀j∈[k]:ekπj=~ekπj=hj. Obviously (ekπk):mk=(~ekπk):mk. We set s∗k:=st (ekπk)mk−1 and a∗k:=(πk)mk. Clearly ek(s∗k,a∗k)≠~ek(s∗k,a∗k).
If k<n−1 then there is f∈H s.t. ∀j∈[k+1]:fπj=hj (by independence). Therefore, st (hk)mk−1=s∗k: otherwise st (fπk)mk−1≠st (ekπk)mk−1 and we get a contradiction with the minimality of mk. We now set x∗k:=(hk)mk. If k=n−1, we choose x∗k∈X arbitrarily. For any k∈[n] and j∈[k], ekπj=~ekπj=hj and therefore ek(s∗j,a∗j)=~ek(s∗j,a∗j)=x∗j. ■
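For concreteness, a minimal sketch of the ♯ operator defined in Proposition A.4 (names ours): e♯(s,π) is simply the length-T history obtained by running the DDP e on the fixed action sequence π from the initial state s.

```python
def sharp(e, s, actions):
    # e : (state, action) -> (next_state, reward);  actions : sequence of length T.
    history, state = [], s
    for a in actions:
        state, reward = e(state, a)
        history.append((state, reward))
    return tuple(history)   # an element of X^T
```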
Proof of Theorem 1
Let A♯:=A^T, O♯:=X^T and H♯ be defined as in Proposition A.4. By Proposition A.4, we have maxs∈SdimpH♯s≤D. Define ζ♯∈ΔH♯ as the pushforward of ζ by the ♯ operator. Define R♯γ:O♯→[0,1] by
$$R^\sharp_\gamma(h) := \frac{\gamma(1-\gamma)}{1-\gamma^{T+1}} \sum_{n=0}^{T-1} \gamma^n\, \mathrm{rw}\, h_n$$
Denote also γ♯:=γ^{T+1}. It is easy to see that given any η∈(O♯)^ω
$$U_\gamma\left(\prod_{n=0}^{\infty} (c_n, 0)\,\eta_n\right) = (1-\gamma^\sharp)\sum_{n=0}^{\infty} (\gamma^\sharp)^n R^\sharp_\gamma(\eta_n)$$
The product is in the string concatenation sense.
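The decomposition above is easy to check numerically; the following sketch verifies a finite truncation of it (zero rewards after N episodes), with R♯γ and γ♯ as defined above and random episode rewards.

```python
import random

rng = random.Random(0)
T, gamma, N = 4, 0.9, 50
episodes = [[rng.random() for _ in range(T)] for _ in range(N)]

# Left hand side: U_gamma of the concatenated history, with reward 0 at each reset.
rewards = [r for ep in episodes for r in [0.0] + ep]
lhs = (1 - gamma) * sum(gamma ** t * r for t, r in enumerate(rewards))

# Right hand side: episode rewards R#_gamma discounted by gamma# = gamma^(T+1).
gamma_sharp = gamma ** (T + 1)
def R_sharp(ep):
    return gamma * (1 - gamma) / (1 - gamma_sharp) * sum(gamma ** n * r for n, r in enumerate(ep))
rhs = (1 - gamma_sharp) * sum(gamma_sharp ** n * R_sharp(ep) for n, ep in enumerate(episodes))

print(abs(lhs - rhs) < 1e-12)   # True
```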
Applying Lemma 1 to all the “♯” objects (with S and c unaffected), we get π†T,γ s.t.
Here, we used the fact that the optimal policy for a DDP with fixed initial state (and any time discount function) can be taken to be a fixed sequence of actions.
Observing that H(ζ♯)≤H(ζ) and 1−γ^{T+1}≤(T+1)(1−γ), we get the desired result. ■
The following is a minor variant of what was called “Proposition B.2” here, and we state it here without proof (since the proof is essentially the same).
Proposition A.5
Consider a DDP e:S×A→X, policies π0,π∗:X∗×Xk→A and T∈N+. Suppose that for any h∈X∗ and x∈X
$$\operatorname{supp} \pi_0(h,s) \subseteq A^0_e(s)$$
Suppose further that π∗ is optimal for e with time discount γ and any initial state. Denote
$$\tau_e(\gamma) := \max_{s \in S}\, \sup_{x \in (\gamma,1)} \left|\frac{dV_e(s,x)}{dx}\right|$$
Given any s∈S and policy π, denote esπ∈ΔXω the distribution over histories resulting from π interacting with e starting from initial state s. Finally, define UT,γ:Xω→[0,1] by
Then
Proof of Theorem 2
It is not hard to see that Theorem 1 can be extended to the setting where the initial state sequence c∈Sω is chosen in a way that depends on the history. This is because if c is chosen adversarially (i.e. in order to maximize regret), the history doesn’t matter (in other words, we get a repeated zero-sum game in which, on each round, the initial state is chosen by the adversary and the policy is chosen by the agent after seeing the initial state; this game clearly has a pure Nash equilibrium). In particular, we can let c be just the states naturally resulting from the interaction of the DDP with the policy.
Let π∗eγ be the optimal policy for DDP e and time discount γ. We get
Here, the second term on the right hand side comes from the rewards at time moments divisible by T, which were set to 0 in Theorem 1. Denote
Applying Proposition A.5, we get
Assume that γ^{T−1}≥1/2. Then
Now set
It is easy to see that the assumption γ^{T−1}≥1/2 is now justified for 1−γ≪1. We get