Fixed points in mortal population games
I wrote this post during the first part of the SERI MATS winter school, in Vanessa Kosoy's research stream. It explains my intended research direction and the motivation behind it, and writes down proofs of the statements in Vanessa's comment describing that direction.
The motivation behind infra-Bayesianism is to develop a better theory of intelligence.
Why do we need a theory of intelligence? When we build superhuman AIs, we can have two kinds of reasons to believe in their safety:
- empirical evidence coming from experiments with less capable AIs;
- mathematically proven guarantees of its performance.
Empirical evidence on its own may be unreliable because, as a system's capabilities improve, results may fail to generalize the way we expected. We need a theory to give us performance guarantees and to tell us how to correctly extrapolate observed results to new conditions.
Why do we need a better theory? The ideal Bayesian agent, known as AIXI, has several shortcomings, and one of them is the realizability assumption: in order for a Bayesian agent to have performance guarantees, the true hypothesis must have positive probability in the agent's prior. However, this is unachievable in the real world, because the agent is contained in the environment, so trying to maintain a true hypothesis about the environment runs into self-reference problems and computational constraints.
A related problem in game theory is the grain-of-truth problem. The term comes from the paper by Ehud Kalai and Ehud Lehrer [1], in which they prove that in an infinitely repeated game, subjectively rational players (i.e. players maximizing their expected reward according to their subjective beliefs about what strategies the opponent is likely to play) will learn to predict their opponents' moves, and their strategies will converge to Nash equilibrium strategies, provided that the "grain of truth" requirement is met: each player must assign a positive prior probability to any set of histories that has a positive chance of occurring in the game.
Unfortunately, the "grain of truth" requirement is very strict, and when it is not met, learning fails. Foster and Young [2] prove that in a repeated 2-player game with uncertain payoffs, rational agents will fail to learn each other's strategies and fail to converge to a Nash equilibrium, except for a set of games of measure 0. John H. Nachbar [3] demonstrates that, loosely speaking, for any sufficiently rich set of strategies, it is impossible for each player to simultaneously predict their opponent's strategy and play a best response to their beliefs about their opponent.
Infra-Bayesianism is a possible solution to the “grain of truth problem”, as it allows agents to have partial models of their environment. In this way, infra-Bayesian beliefs are similar to physical laws. For example, Newton’s laws of motion can predict the path of a falling object, but not the weather tomorrow or the outcome of an election. Instead of using a single probability distribution, Infra-Bayesianism defines hypotheses as convex sets of probability distributions. This allows Infra-Bayesian hypotheses to be agnostic about certain parts of the world.
Our hypothesis is that infra-Bayesian agents will have an advantage over Bayesian agents in iterated games. I want to try to compare their performance in an iterated-game setting. It will be interesting to see if infra-Bayesian agents have better convergence to Nash equilibria.
The folk theorem states that any payoff vector above the minimax of each player can be achieved by a subgame-perfect equilibrium of a repeated game. For 2-player games, minimax and maximin payoffs in mixed strategies are the same. Therefore the best guaranteed payoff in a 2-player game is the maximin payoff. However, achieving it would not be very interesting, because to receive a maximin payoff, the agent only needs to learn its own rewards, and doesn’t need to learn about other agents’ policies.
This is why it is interesting to consider another simple kind of repeated game: a population game. Instead of a fixed set of agents playing with each other, we consider K populations of agents. At each turn, a random agent is selected from each population to participate in a one-shot game with K players, which we refer to as the stage game. It is assumed that all populations are sufficiently large that the effect of any individual agent deviating from its policy on the distribution of histories of other agents is negligible.
In an immortal population game, the expected rewards of future games are discounted by a time factor of γ. In a mortal population game, the rewards are not discounted but instead each agent has a chance 1−γ of dying after each turn. Each turn all dead agents are replaced with new agents with empty histories.
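To make the setup concrete, here is a minimal Monte Carlo sketch of a mortal population game with finite populations standing in for the idealized infinitely large ones. In this sketch every agent plays each turn against opponents sampled independently from the other populations, matching the evolution rule described later in the post; the representation (histories as tuples of outcomes, policies as functions returning action-probability dicts) and all names are my own illustration, not part of the formal construction.

```python
import random

def sample_action(policy, history, rng=random):
    """Sample an action; policy(history) returns a dict {action: probability}."""
    dist = policy(history)
    actions = list(dist)
    return rng.choices(actions, weights=[dist[a] for a in actions])[0]

def step(populations, policies, gamma, rng=random):
    """One turn of a mortal population game (finite-population Monte Carlo sketch).

    populations[i] is a list of histories (tuples of outcomes) for population i.
    Each agent plays one stage game against opponents drawn from the other
    populations, appends the outcome to its own history, then dies with
    probability 1 - gamma and is replaced by a newborn with an empty history.
    """
    K = len(populations)
    new_populations = []
    for i in range(K):
        new_pop = []
        for hist in populations[i]:
            # Opponents' histories are drawn from the current distributions.
            hists = [hist if j == i else rng.choice(populations[j]) for j in range(K)]
            outcome = tuple(sample_action(policies[j], hists[j], rng) for j in range(K))
            new_pop.append(hist + (outcome,) if rng.random() < gamma else ())
        new_populations.append(new_pop)
    return new_populations
```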
Let us introduce some notation:
Ai, i=1,…,K is the set of pure strategies (actions) of the i-th player in the stage game. We assume the Ai are finite sets.
[I’ll deal with deterministic rewards first, then see what happens in more complicated situations. For a deterministic reward, we can assume that a player knows his reward function.]
$O=\prod_{i=1}^K A_i$ is the set of pure outcomes of a stage game. We assume that each player knows his own deterministic reward function $R_i:O\to[0,1]$.
The set of n-round outcome histories is $O^n$, and $O^*=\bigcup_n O^n$ is the set of all finite-length histories.
|h| is the length of a history h∈O∗.
Definition (distance between histories)
The distance between partial histories h1,h2∈O∗ is defined as
$$d(h_1,h_2)=\alpha^{t(h_1,h_2)},$$
where α∈(0,1) is some constant, t(h1,h2) is the first time the histories diverge, i.e. the smallest t such that exactly one of the histories has fewer than t steps, or they both have at least t steps and their step t is different. If the histories are identical, the distance is defined to be 0. It is easy to see that this is indeed a metric.
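To make the definition concrete, here is a direct translation into code, assuming histories are represented as tuples of stage-game outcomes (a representation chosen purely for illustration):

```python
def history_distance(h1, h2, alpha=0.5):
    """d(h1, h2) = alpha ** t, where t is the first time the histories diverge.

    Histories are tuples of stage-game outcomes; alpha is a constant in (0, 1).
    """
    if h1 == h2:
        return 0.0
    # t is the smallest step at which exactly one history has already ended,
    # or at which both histories are defined but their steps differ.
    for t in range(1, max(len(h1), len(h2)) + 1):
        ended1, ended2 = len(h1) < t, len(h2) < t
        if ended1 != ended2 or (not ended1 and h1[t - 1] != h2[t - 1]):
            return alpha ** t
    return 0.0  # unreachable for distinct finite histories
```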
ΔO∗ represents the set of probability distributions on O∗.
Definition (distance between measures)
The Kantorovich-Rubinstein (KR) metric on the space ΔO∗ is defined as follows:
$$d(m,m')=\sup_f\,|m(f)-m'(f)|,$$
where the supremum is taken over all 1-Lipschitz functions f:O∗→[−1,1], and m(f) denotes ∫f dm.
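For finitely supported measures the supremum can be computed exactly as a small linear program over the values of f on the union of the two supports (such an assignment always extends to a 1-Lipschitz function on all of O∗ with values in [−1,1]). Below is a sketch using scipy and the history_distance helper above; the dict-based representation of measures is my own assumption:

```python
import numpy as np
from scipy.optimize import linprog

def kr_distance(m1, m2, alpha=0.5):
    """Kantorovich-Rubinstein distance between two finitely supported measures on O*.

    m1, m2: dicts mapping histories (tuples) to probabilities.
    Solves sup_f |m1(f) - m2(f)| over 1-Lipschitz f: O* -> [-1, 1] as a linear program.
    """
    support = list(set(m1) | set(m2))
    n = len(support)
    c = np.array([m1.get(h, 0.0) - m2.get(h, 0.0) for h in support])

    # Lipschitz constraints: f(h_a) - f(h_b) <= d(h_a, h_b) for every ordered pair.
    A_ub, b_ub = [], []
    for a in range(n):
        for b in range(n):
            if a != b:
                row = np.zeros(n)
                row[a], row[b] = 1.0, -1.0
                A_ub.append(row)
                b_ub.append(history_distance(support[a], support[b], alpha))

    # The feasible set is symmetric under f -> -f, so maximizing m1(f) - m2(f)
    # also gives the supremum of |m1(f) - m2(f)|.  linprog minimizes, hence -c.
    res = linprog(
        -c,
        A_ub=np.array(A_ub) if A_ub else None,
        b_ub=np.array(b_ub) if b_ub else None,
        bounds=[(-1.0, 1.0)] * n,
        method="highs",
    )
    return -res.fun
```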
πi:O∗→ΔAi is the policy of player number i. It determines which mixed strategy player number i chooses, given the full history it has seen so far. Upon seeing history h, the probability that player i will choose action a is πi(h)({a}), which I may also write down as πi(a|h) for brevity.
μni∈ΔO∗ is the distribution of histories in the i-th population at the nth turn of the repeated game.
Note that the history distributions of different populations can differ. For example, suppose player 1 has two possible actions, and his policy is to pick one of them uniformly at random in his first game and then always repeat it. Then in population 1's histories player 1's actions are repeated, while in population 2's histories player 1's actions look random, because each turn population 2's opponent is a fresh sample from population 1, and each of the two actions is equally likely.
Definition.
A set of policies is said to be in equilibrium if playing a different strategy π′i for any agent, while all other agents (including other agents in population i) stick to their policies, does not result in greater expected reward.
What does an equilibrium look like? Populations are assumed to be sufficiently large that the effect of any individual agent's actions on the distribution of histories in its population is negligible. At turn $n$, opponent number $j$ plays the mixed strategy $\pi_j(h_j)$ with probability $\mu^n_j(h_j)$, so his moves follow the probability distribution $\pi^n_j=\sum_h\mu^n_j(h)\,\pi_j(h)$. This means that the best policy for an individual agent in population $i$ is a best response to the mixed policies $\pi^n_1,\dots,\overline{\pi^n_i},\dots,\pi^n_K$ at each turn $n$ (the bar means that this element of the list is skipped). In equilibrium each agent follows a best-response policy, so their probabilistic mixture $\pi^n_i$ is also a best response. So the per-population average policies at every turn form a Nash equilibrium of the stage game.
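For a finitely supported $\mu^n_j$, this per-turn aggregate mixed strategy is just a weighted average of the policy's outputs; a minimal sketch under the same illustrative representation as before:

```python
from collections import defaultdict

def population_mixed_strategy(mu, policy):
    """Aggregate mixed strategy of a population: sum_h mu(h) * policy(h).

    mu:     dict mapping histories to probabilities (finitely supported).
    policy: function mapping a history to a dict {action: probability}.
    """
    aggregate = defaultdict(float)
    for history, weight in mu.items():
        for action, prob in policy(history).items():
            aggregate[action] += weight * prob
    return dict(aggregate)
```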
It is interesting to find out if choosing πi to be learning strategies will result in convergence to a Nash equilibrium in the limit of γ→1.
The evolution rule Ti:(ΔO∗)K→ΔO∗ that produces μn+1i from μn1,…,μnK works as follows.
For the first step, for each j=1,…,K we sample a history for player j from μnj. Then we sample actions according to policies πj:O∗→ΔAj. We append the outcome of the game to the history of player number i.
For the second step, each player dies with probability 1−γ and is replaced with a new player with an empty history.
The first step produces the intermediate distribution $\hat T_i(\mu^n_1,\dots,\mu^n_K)$, where, for $|h|\ge 1$,
$$\hat T_i(\mu_1,\dots,\mu_K)(h)=\mu_i(h')\,\pi_i(a_i\mid h')\sum_{h_1}\dots\overline{\sum_{h_i}}\dots\sum_{h_K}\prod_{j\neq i}\mu_j(h_j)\,\pi_j(a_j\mid h_j),$$
where h′ stands for history h without the last step, a=(a1,…,aK) for the actions taken at the last step of h, and the sums on the right-hand side are taken over histories of all players except for player i.
The second step then yields
$$\mu^{n+1}_i=T_i(\mu^n_1,\dots,\mu^n_K),\qquad T_i(\mu_1,\dots,\mu_K)=\gamma\,\hat T_i(\mu_1,\dots,\mu_K)+(1-\gamma)\,\delta_\varnothing,$$
where δ∅ stands for the Dirac delta measure of an empty history.
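For finitely supported distributions the maps $\hat T_i$ and $T_i$ can be computed exactly, following the two formulas above. The sketch below reuses population_mixed_strategy from the earlier snippet; as before, the representation is an assumption made for illustration:

```python
from collections import defaultdict
from itertools import product

def hat_T(i, mus, policies):
    """One-step update (first step) of the history distribution of population i.

    mus[j]:      dict mapping histories of population j to probabilities.
    policies[j]: function mapping a history to a dict {action: probability}.
    """
    K = len(mus)
    new_mu = defaultdict(float)
    for hist_i, p_i in mus[i].items():
        # Player i acts from hist_i; each opponent j != i is an independent
        # sample from mus[j], so its action follows the aggregate mixed strategy.
        action_dists = [
            policies[j](hist_i) if j == i else population_mixed_strategy(mus[j], policies[j])
            for j in range(K)
        ]
        for actions in product(*[list(d) for d in action_dists]):
            prob = p_i
            for j, a in enumerate(actions):
                prob *= action_dists[j][a]
            new_mu[hist_i + (actions,)] += prob
    return dict(new_mu)

def T(i, mus, policies, gamma):
    """Full update: survive with probability gamma, otherwise restart with an empty history."""
    new_mu = {h: gamma * p for h, p in hat_T(i, mus, policies).items()}
    new_mu[()] = new_mu.get((), 0.0) + (1.0 - gamma)
    return new_mu
```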
Lemma.
Ti is continuous.
Proof.
Ti is obtained from ^Ti by an affine transformation, so it is enough to prove the continuity of ^Ti.
I am going to write down the proof for K=3; it generalizes straightforwardly to any K.
For any 1-Lipschitz function f:O∗→[−1,1],
$$\hat T_1(\mu_1,\mu_2,\mu_3)(f)-\hat T_1(\mu'_1,\mu'_2,\mu'_3)(f)$$
$$=\sum_{|h|\ge 1} f(h)\,\mu_1(h')\,\pi_1(a_1\mid h')\sum_{h_2}\mu_2(h_2)\,\pi_2(a_2\mid h_2)\sum_{h_3}\mu_3(h_3)\,\pi_3(a_3\mid h_3)$$
$$-\sum_{|h|\ge 1} f(h)\,\mu'_1(h')\,\pi_1(a_1\mid h')\sum_{h_2}\mu'_2(h_2)\,\pi_2(a_2\mid h_2)\sum_{h_3}\mu'_3(h_3)\,\pi_3(a_3\mid h_3)$$
$$=\sum_{|h|\ge 1} f(h)\,\mu_1(h')\,\pi_1(a_1\mid h')\sum_{h_2}\mu'_2(h_2)\,\pi_2(a_2\mid h_2)\sum_{h_3}\bigl(\mu_3(h_3)-\mu'_3(h_3)\bigr)\,\pi_3(a_3\mid h_3)$$
$$+\sum_{|h|\ge 1} f(h)\,\mu_1(h')\,\pi_1(a_1\mid h')\sum_{h_2}\bigl(\mu_2(h_2)-\mu'_2(h_2)\bigr)\,\pi_2(a_2\mid h_2)\sum_{h_3}\mu_3(h_3)\,\pi_3(a_3\mid h_3)$$
$$+\sum_{|h|\ge 1} f(h)\,\bigl(\mu_1(h')-\mu'_1(h')\bigr)\,\pi_1(a_1\mid h')\sum_{h_2}\mu'_2(h_2)\,\pi_2(a_2\mid h_2)\sum_{h_3}\mu'_3(h_3)\,\pi_3(a_3\mid h_3)\quad (1)$$
Let us look at the second summand.
$$\sum_{|h|\ge 1} f(h)\,\mu_1(h')\,\pi_1(a_1\mid h')\sum_{h_2}\bigl(\mu_2(h_2)-\mu'_2(h_2)\bigr)\,\pi_2(a_2\mid h_2)\sum_{h_3}\mu_3(h_3)\,\pi_3(a_3\mid h_3)$$
$$\le\sum_{|h|\ge 1}\mu_1(h')\sum_{h_2}\bigl|\mu_2(h_2)-\mu'_2(h_2)\bigr|,$$
because $|f|\le 1$, $|\pi_1(a_1\mid h')|\le 1$, $|\pi_2(a_2\mid h_2)|\le 1$ and $\sum_{h_3}\mu_3(h_3)\,\pi_3(a_3\mid h_3)\le\sum_{h_3}\mu_3(h_3)=1$.
Choose $t=t(\mu_2)$ such that $\sum_{|h|>t}\mu_2(h)\le\epsilon/(9|A|)$, where $|A|=|O|$ denotes the number of possible outcomes of a stage game.
Then the sum
$$\sum_h|\mu_2(h)-\mu'_2(h)|=\sum_{|h|\le t}|\mu_2(h)-\mu'_2(h)|+\sum_{|h|>t}|\mu_2(h)-\mu'_2(h)|$$
$$\le\sum_{|h|\le t}|\mu_2(h)-\mu'_2(h)|+\sum_{|h|>t}\mu_2(h)+\sum_{|h|>t}\mu'_2(h)\quad (2)$$
Let us upper bound the first summand in (2):
$$\sum_{|h|\le t}|\mu_2(h)-\mu'_2(h)|=2\alpha^{-(t+1)}\sum_h g(h)\,\bigl(\mu_2(h)-\mu'_2(h)\bigr),$$
where $g(h)=0.5\,\alpha^{t+1}\,\mathrm{sign}(\mu_2(h)-\mu'_2(h))$ if $|h|\le t$ and $0$ otherwise.
Then g is a 1-Lipschitz function from O∗ to [-1, 1].
Indeed, $|g|\le 0.5\,\alpha^{t+1}<1$. For two distinct histories $h_1,h_2$ that both have length at most $t$, the divergence time $t(h_1,h_2)$ is at most $t$, so $|g(h_1)-g(h_2)|\le\alpha^{t+1}\le\alpha^{t(h_1,h_2)}=d(h_1,h_2)$. If exactly one of them is longer than $t$ (so that $g$ of it is $0$), then $t(h_1,h_2)\le t+1$ and $|g(h_1)-g(h_2)|\le 0.5\,\alpha^{t+1}\le d(h_1,h_2)$; if both are longer than $t$, the difference is $0$.
Therefore, this expression can be bounded from above by
$$2\alpha^{-(t+1)}\sum_h g(h)\,\bigl(\mu_2(h)-\mu'_2(h)\bigr)\le 2\alpha^{-(t+1)}\,d(\mu_2,\mu'_2).$$
Now for the last summand in (2):
$$\sum_{|h|>t}\mu'_2(h)=1-\sum_{|h|\le t}\mu'_2(h)$$
$$\le 1-\sum_{|h|\le t}\mu_2(h)+\sum_{|h|\le t}|\mu_2(h)-\mu'_2(h)|$$
$$=\sum_{|h|>t}\mu_2(h)+\sum_{|h|\le t}|\mu_2(h)-\mu'_2(h)|$$
Bringing them together:
$$\sum_{h\in O^*}|\mu_2(h)-\mu'_2(h)|\le 2\sum_{|h|\le t}|\mu_2(h)-\mu'_2(h)|+2\sum_{|h|>t}\mu_2(h)$$
$$\le 4\alpha^{-(t+1)}\,d(\mu_2,\mu'_2)+2\epsilon/(9|A|).$$
It follows that for any $\mu'_2$ such that $d(\mu_2,\mu'_2)<\tfrac14\,\alpha^{t(\mu_2)+1}\,\epsilon/(9|A|)$, this sum is less than $\epsilon/(3|A|)$.
Since every history has $|A|$ possible one-step successors, $\sum_{h\in O^*,\,|h|\ge 1}\mu_1(h')=\sum_{h_1\in O^*}\mu_1(h_1)\cdot|A|=|A|$.
We proved that for any $\mu'_2$ such that $d(\mu_2,\mu'_2)<\tfrac14\,\alpha^{t(\mu_2)+1}\,\epsilon/(9|A|)$, the second summand in (1) is bounded from above by
$$\sum_{|h|\ge 1}\mu_1(h')\sum_{h_2}|\mu_2(h_2)-\mu'_2(h_2)|\le\sum_{|h|\ge 1}\mu_1(h')\,\epsilon/(3|A|)\le\epsilon/(3|A|)\cdot|A|=\epsilon/3.$$
The same argument proves that for any $\mu'_3$ such that $d(\mu_3,\mu'_3)<\tfrac14\,\alpha^{t(\mu_3)+1}\,\epsilon/(9|A|)$, the first summand is bounded from above by $\epsilon/3$.
Now let us upper bound the last summand in (1).
It can be written down as
$$\sum_{|h|\ge 1} f(h)\,\bigl(\mu_1(h')-\mu'_1(h')\bigr)\,P(h\mid h'),$$
where
$$P(h\mid h_1)=\begin{cases}\pi_1(a_1\mid h_1)\,\displaystyle\sum_{h_2}\mu'_2(h_2)\,\pi_2(a_2\mid h_2)\sum_{h_3}\mu'_3(h_3)\,\pi_3(a_3\mid h_3) & \text{if } h_1=h',\\ 0 & \text{if } h_1\neq h'\end{cases}$$
is the probability that player 1 with history $h_1$ will end up with history $h$ after one step (when the opponents' histories are distributed according to $\mu'_2$ and $\mu'_3$). We can rewrite this expression as
$$\sum_h\sum_{h_1} f(h)\,\bigl(\mu_1(h_1)-\mu'_1(h_1)\bigr)\,P(h\mid h_1),$$
because if h1 is not equal to the history h without the last step, the probability P(h|h1) is equal to zero. Now rewrite this as
$$\sum_{h_1}\bigl(\mu_1(h_1)-\mu'_1(h_1)\bigr)\sum_h P(h\mid h_1)\,f(h)=(\mu_1-\mu'_1)(\tilde f),$$
where $\tilde f(h_1)=\sum_h P(h\mid h_1)\,f(h)$.
Let us prove that if $f$ is a 1-Lipschitz function $O^*\to[-1,1]$, the same is true of $\tilde f$.
The inequalities $-1\le\tilde f\le 1$ are obvious. For any two distinct histories $h_1$ and $h_2$, any two one-step continuations $h_1a$ and $h_2a'$ satisfy $d(h_1a,h_2a')\le d(h_1,h_2)$, and therefore $|f(h_1a)-f(h_2a')|\le d(h_1a,h_2a')\le d(h_1,h_2)$. Since $\tilde f(h_1)$ and $\tilde f(h_2)$ are averages of $f$ over one-step continuations of $h_1$ and $h_2$ respectively,
$$\tilde f(h_1)-\tilde f(h_2)\le\sup_{a} f(h_1a)-\inf_{a'} f(h_2a')\le d(h_1,h_2).$$
The same way,
$$\tilde f(h_2)-\tilde f(h_1)\le\sup_{a'} f(h_2a')-\inf_{a} f(h_1a)\le d(h_1,h_2),$$
and so $\tilde f$ is 1-Lipschitz.
Therefore, the third summand is bounded from above by $d(\mu_1,\mu'_1)$. So if we choose $\mu'_1$ so that $d(\mu_1,\mu'_1)\le\epsilon/3$, $\mu'_2$ so that $d(\mu_2,\mu'_2)\le\tfrac14\,\alpha^{t(\mu_2)+1}\,\epsilon/(9|A|)$, and $\mu'_3$ so that $d(\mu_3,\mu'_3)\le\tfrac14\,\alpha^{t(\mu_3)+1}\,\epsilon/(9|A|)$, we get $\hat T_1(\mu_1,\mu_2,\mu_3)(f)-\hat T_1(\mu'_1,\mu'_2,\mu'_3)(f)\le\epsilon$. QED.
Notice that we also proved that if μ2 and μ3 are kept fixed, ^T1 does not increase the distance, and therefore T1 is a contraction (with coefficient γ) in its first argument.
Combining all components gives us the map T=(T1,…,TK):(ΔO∗)K→(ΔO∗)K.
We can extend a finite history to an infinite history by appending an infinite number of blank or "placeholder" turns, represented by the symbol ⊥, at the end of the original history. With this notation, O∗ is a subset of (O⊔{⊥})ω. The set O⊔{⊥} (with the discrete topology) is compact, and therefore the product (O⊔{⊥})ω is a compact space by Tychonoff's theorem. ΔO∗ can be seen as a subset of Δ((O⊔{⊥})ω): we take a probability distribution μ on O∗ and for any Borel subset E⊂(O⊔{⊥})ω define μ(E):=μ(E∩O∗).
The distance on (O⊔{⊥})ω is defined just like on O∗: the distance between two histories is αt where t is the first time the two histories diverge.
Proposition
The topology induced on (O⊔{⊥})ω by this metric is the product topology on (O⊔{⊥})ω.
Proof
The product topology on (O⊔{⊥})ω is generated by the basis sets $U(x,M)=\prod_{i=1}^M\{x_i\}\times\prod_{i=M+1}^\infty(O\sqcup\{\bot\})$. The metric topology is generated by the open balls $B(x,r)=\{y\in(O\sqcup\{\bot\})^\omega : d(x,y)<r\}$. These two families coincide: $B(x,r)=U(x,M)$, where $M$ is the smallest integer such that $\alpha^{M+1}<r$, and conversely $U(x,M)=B(x,\alpha^M)$. Therefore the topologies also coincide.
Since (O⊔{⊥})ω is a compact metric space, it is also complete. The set of sequences with finitely many non-⊥ elements is countable and dense in (O⊔{⊥})ω, so (O⊔{⊥})ω is separable. This means that (O⊔{⊥})ω is a separable, completely metrizable topological space, also known as a Polish space.
Because (O⊔{⊥})ω is compact, the space of probability distributions over it, Δ((O⊔{⊥})ω), is compact in the weak* topology (see Theorem 10.2 in [6]).
However, we are looking for fixed points among probability distributions on O∗ and, unfortunately, ΔO∗ is not compact as O∗ is not closed in (O⊔{⊥})ω. (A sequence of finite histories that are continuations of each other has a limit in (O⊔{⊥})ω, but it’s not a finite history.)
On the other hand, we can observe that a fixed point of T in (ΔO∗)K has the following property:
If the proportion of players in a population with history length t is x in the nth generation, then the proportion of players with history length t+1 in the next generation will be γx, since γ is the fraction of players with a t-step history that survive the turn. This implies that in a fixed point the age distribution ζ must satisfy ζ(t+1)=γζ(t). This means that ζ(t)=γ^t(1−γ).
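Spelling out the last step, the recursion together with normalization pins down ζ:
$$\zeta(t)=\gamma^{t}\,\zeta(0),\qquad \sum_{t=0}^{\infty}\zeta(t)=\zeta(0)\sum_{t=0}^{\infty}\gamma^{t}=\frac{\zeta(0)}{1-\gamma}=1\;\Longrightarrow\;\zeta(0)=1-\gamma,\quad \zeta(t)=\gamma^{t}(1-\gamma).$$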
We call P the subset of ΔO∗⊂Δ((O⊔{⊥})ω) consisting of distributions whose age distribution is ζ.
Lemma
P is closed in the weak* topology.
Proof
Suppose μn→μ is a convergent sequence of measures with μn∈P. This means that for any continuous function f:(O⊔{⊥})ω→R,
∫f dμn→∫f dμ.
For elements of (O⊔{⊥})ω, we can define length as the number of symbols before the first occurrence of the symbol ⊥ (if there is no symbol ⊥, the length is infinite).
For a fixed nonnegative integer t, consider the function f(h) which is equal to 1 if h has length t, and 0 otherwise. It is continuous because it is locally constant: whether a history has length t is determined by its first t+1 symbols, so any h′ with d(h,h′)<α^{t+1} satisfies f(h′)=f(h). The integral ∫f dμn measures the proportion of histories in the distribution μn which have length t; the limit distribution μ therefore has the same age distribution ζ and also belongs to the set P.
As a closed subset of a compact set, P is compact. It is also easy to see that P is convex.
Theorem
$T=(T_1,\dots,T_K):P^K\to P^K$ has a fixed point.
Proof
The Schauder fixed-point theorem implies that a continuous map from a compact convex subset of a Hausdorff topological vector space to itself has a fixed point. We showed that P is compact and convex, hence so is $P^K$ as a subset of the product space, and that T is continuous; T maps $P^K$ to itself because ζ is a fixed point of the induced map on age distributions (see the proposition below).
The set Δ((O⊔{⊥})ω) is a subset of the topological vector space M((O⊔{⊥})ω) of all finite signed measures on (O⊔{⊥})ω with the weak* topology. Let us show that this vector space is Hausdorff. Take any two signed measures with Jordan decompositions $m_1=m_1^+-m_1^-$ and $m_2=m_2^+-m_2^-$, and suppose that no continuous function f on (O⊔{⊥})ω separates them: ∫f dm1=∫f dm2 for every such f. Then $\int f\,d(m_1^++m_2^-)=\int f\,d(m_2^++m_1^-)$. The functional $L(f)=\int f\,d(m_1^++m_2^-)$ is a positive linear functional on C((O⊔{⊥})ω). The space (O⊔{⊥})ω is metric and therefore Hausdorff; it is compact and therefore locally compact. Therefore, by the Riesz representation theorem, the measure corresponding to this functional is unique: $m_1^++m_2^-=m_2^++m_1^-$, and therefore m1=m2. A finite power of a Hausdorff space is Hausdorff, so the product space M((O⊔{⊥})ω)^K, in which P^K sits, is Hausdorff as well.
Another possibly interesting observation is that the age distribution of histories converges to ζ from any initial condition.
Proposition
The age distribution of histories $m(t)=\sum_{|h|=t}\mu(h)$ converges to the distribution $\zeta(t)=\gamma^t(1-\gamma)$ in the norm $d(m_1,m_2)=\sum_t|m_1(t)-m_2(t)|$.
Proof
Since the map Ti adds one step to every history and then replaces each agent with a newborn with probability 1−γ, it induces a map on age distributions:
$$T_i(m)(t)=\begin{cases}\gamma\,m(t-1) & \text{for } t>0,\\ 1-\gamma & \text{for } t=0.\end{cases}$$
For this map ζ is a fixed point and $d(T(m_1),T(m_2))=\gamma\,d(m_1,m_2)$. It follows that any initial age distribution converges to ζ.
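A quick numerical illustration of the contraction (a self-contained sketch; the truncation at 200 age classes and the all-newborn starting population are arbitrary choices of mine):

```python
def age_update(m, gamma):
    """Induced map on age distributions: survivors age by one turn, newborns have age 0."""
    return {0: 1.0 - gamma, **{t + 1: gamma * p for t, p in m.items()}}

def l1(m1, m2):
    keys = set(m1) | set(m2)
    return sum(abs(m1.get(t, 0.0) - m2.get(t, 0.0)) for t in keys)

gamma = 0.9
zeta = {t: (1 - gamma) * gamma ** t for t in range(200)}   # truncated geometric limit
m = {0: 1.0}                                               # start with an all-newborn population
for _ in range(50):
    m = age_update(m, gamma)
print(l1(m, zeta))   # decays like gamma ** 50, up to a tiny truncation error
```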
What happens at the fixed points? In a fixed point, the distribution of opponents stays the same each turn, so a player effectively observes an IID (independent, identically distributed) sequence of stage games against opponents drawn from fixed distributions. Suppose that the policy π satisfies the following property: upon observing an IID sequence of games, it converges to the optimal response in the limit of γ→1. Then in this limit, fixed points will also be Nash equilibria.
The property above can be satisfied even by very simple algorithms, for example one that assumes that the opponent's next move is sampled from the empirical distribution of the opponent's previous moves. Any Bayesian agent that includes all IID processes in its prior will satisfy this property.
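Here is a sketch of one such simple rule: it treats the opponents' past joint moves as an IID sample and best-responds to their empirical distribution (a fictitious-play-style policy; the interface and names are hypothetical, chosen to match the earlier sketches):

```python
def empirical_best_response(history, my_index, my_actions, reward):
    """Best response to the empirical distribution of the opponents' past action profiles.

    history:    tuple of past outcomes (each outcome is a tuple of actions, one per player).
    my_index:   this player's position in each outcome tuple.
    my_actions: list of this player's available actions.
    reward:     function mapping a full outcome tuple to this player's reward.
    Returns a dict {action: probability} putting all mass on one maximizing action.
    """
    if not history:
        # No observations yet: play uniformly at random.
        return {a: 1.0 / len(my_actions) for a in my_actions}

    def expected_reward(candidate):
        total = 0.0
        for outcome in history:
            profile = list(outcome)
            profile[my_index] = candidate   # pair the candidate action with observed opponents
            total += reward(tuple(profile))
        return total / len(history)

    best = max(my_actions, key=expected_reward)
    return {best: 1.0}
```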
Some interesting questions for research are:
- Are any/all of the fixed points attractors?
- Does convergence to a fixed point occur for all, or at least almost all, initial conditions?
- Do all Nash equilibria correspond to fixed points?
- Do stronger game-theoretic solution concepts (e.g. proper equilibria) have corresponding dynamical properties?
[1] Kalai, Ehud, and Ehud Lehrer. "Rational learning leads to Nash equilibrium." Econometrica: Journal of the Econometric Society (1993): 1019–1045.
[2] Foster, Dean P., and H. Peyton Young. "On the impossibility of predicting the behavior of rational agents." Proceedings of the National Academy of Sciences 98.22 (2001): 12848–12853.
[3] Nachbar, John H. "Beliefs in repeated games." Econometrica 73.2 (2005): 459–480.
[4] Hadfield-Menell, D., et al. "Cooperative inverse reinforcement learning." Advances in Neural Information Processing Systems, Vol. 29 (2016).
[5] Hadfield-Menell, D., et al. "The off-switch game." Workshops at the Thirty-First AAAI Conference on Artificial Intelligence (2017).
[6] Walkden, Charles. "Ergodic Theory," lecture 10.