Catastrophe Mitigation Using DRL
Previously we derived a regret bound for DRL which assumed the advisor is “locally sane.” Such an advisor can only take actions that don’t lose any value in the long term. In particular, if the environment contains a latent catastrophe that manifests at a certain rate (such as the possibility of a UFAI), a locally sane advisor has to take the optimal course of action to mitigate it, since every delay yields a positive probability of the catastrophe manifesting and leading to permanent loss of value. This state of affairs is unsatisfactory, since we would like to have performance guarantees for an AI that can mitigate catastrophes that the human operator cannot mitigate on their own. To address this problem, we introduce a new form of DRL where in every hypothetical environment the set of uncorrupted states is divided into “dangerous” (impending catastrophe) and “safe” (catastrophe was mitigated). The advisor is then only required to be locally sane in safe states, whereas in dangerous states certain “leaking” of long-term value is allowed. We derive a regret bound in this setting as a function of the time discount factor, the expected value of catastrophe mitigation time for the optimal policy, and the “value leak” rate (i.e. essentially the rate of catastrophe occurrence). The form of this regret bound implies that in certain asymptotic regimes, the agent attains near-optimal expected utility (and in particular mitigates the catastrophe with probability close to 1), whereas the advisor on its own fails to mitigate the catastrophe with probability close to 1.
Appendix A proves the main theorem. Appendix B contains the proof of an important lemma which is however almost identical to what appeared in the previous essay. Appendix C contains several propositions from the previous essay which are used in the proof. [Appendices B and C were moved to a separate post because of a length limit in the website.]
Results
We start by formalising the concepts of a “catastrophe” and “catastrophe mitigation” in the language of MDPs.
Definition 1
A catastrophe MDP is an MDP together with a partition of into subsets (safe, dangerous and corrupt states respectively).
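The structure of Definition 1 can be sketched in code. This is a minimal illustrative data structure, not the paper’s formalism: the original notation for the partition was lost in formatting, so all names (`transitions`, `rewards`, `safe`, `dangerous`, `corrupt`) are assumptions.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class CatastropheMDP:
    """A finite MDP whose state set is partitioned into safe,
    dangerous and corrupt subsets, as in Definition 1. Field
    names are illustrative placeholders."""
    n_states: int
    n_actions: int
    transitions: np.ndarray  # shape (S, A, S); each row a distribution
    rewards: np.ndarray      # shape (S, A)
    safe: frozenset
    dangerous: frozenset
    corrupt: frozenset

    def __post_init__(self) -> None:
        # The three subsets must form a partition of the state set.
        parts = [self.safe, self.dangerous, self.corrupt]
        assert sum(len(p) for p in parts) == self.n_states
        assert frozenset().union(*parts) == set(range(self.n_states))
```

The partition check in `__post_init__` is the only substantive content of the definition: every state is exactly one of safe, dangerous, or corrupt.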
Definition 2
Fix a catastrophe MDP . Define by
is called a mitigation policy for when
i. For any , .
is called a proper mitigation policy for when condition i holds and
ii. For any , .
Definition 3
Fix , a catastrophe MDP and a proper mitigation policy . is said to have expected mitigation time when for any
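A simple way to get intuition for expected mitigation time is by simulation. The sketch below reads mitigation time as the first hitting time of the safe set under a stationary policy; since the displayed formula of Definition 3 was lost in formatting, this reading (and every identifier in the code) is an assumption.

```python
import numpy as np


def estimate_mitigation_time(transitions, policy, start, safe,
                             n_runs=2000, horizon=1000, seed=0):
    """Monte Carlo estimate of expected mitigation time: the mean
    number of steps until the Markov chain induced by `policy`
    first enters the safe set, starting from `start`. An
    illustrative reading of Definition 3, not the exact formula."""
    rng = np.random.default_rng(seed)
    n_states = transitions.shape[0]
    total = 0.0
    for _ in range(n_runs):
        s = start
        for t in range(horizon):
            if s in safe:
                break  # catastrophe mitigated after t steps
            a = rng.choice(len(policy[s]), p=policy[s])
            s = rng.choice(n_states, p=transitions[s, a])
        total += t
    return total / n_runs
```

For a proper mitigation policy the hitting time is finite with probability 1, so the truncation at `horizon` is harmless for large enough values.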
Next, we introduce the notion of an MDP perturbation. We will use it by considering perturbations of a catastrophe MDP which “eliminate the catastrophe.”
Definition 4
Fix and consider a catastrophe MDP . An MDP is said to be a -perturbation of when
i.
ii.
iii.
iv. For any and ,
v. For any and , there exists s.t. .
Similarly, we can consider perturbations of a policy.
Definition 5
Fix and consider a catastrophe MDP . Given and , is said to be a -perturbation of when
i. For any , .
ii. For any , there exists s.t. .
We will also need to introduce policy-specific value functions, Q-functions and relatively -optimal actions.
Definition 6
Fix an MDP and . We define and by
For each , we define , and by
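For a finite discounted MDP, the policy-specific value and Q-functions of Definition 6 can be computed exactly by solving the linear Bellman equations. The sketch below is the standard unnormalized construction; the exact normalization used in the post (e.g. a factor of the discount complement) was lost in formatting, so treat it as a conventional stand-in.

```python
import numpy as np


def policy_values(transitions, rewards, policy, gamma):
    """Compute V^pi and Q^pi for a stationary policy in a finite
    discounted MDP by solving the linear Bellman equations.
    Standard construction, offered as a sketch of Definition 6."""
    n_states, n_actions, _ = transitions.shape
    # Transition matrix and reward vector induced by the policy.
    p_pi = np.einsum('sa,sat->st', policy, transitions)
    r_pi = np.einsum('sa,sa->s', policy, rewards)
    # V^pi solves (I - gamma * P^pi) V^pi = r^pi.
    v = np.linalg.solve(np.eye(n_states) - gamma * p_pi, r_pi)
    # Q^pi(s, a) = r(s, a) + gamma * sum_t P(t | s, a) V^pi(t)
    q = rewards + gamma * np.einsum('sat,t->sa', transitions, v)
    return v, q
```

A relatively optimal action at a state is then one whose Q-value is within the allowed slack of the maximum over actions at that state.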
Now we give the new (weaker) condition on the advisor policy. For notational simplicity, we assume the policy is stationary. It is easy to generalize these results to non-stationary advisor policies and to policies that depend on irrelevant additional information (i.e. policies for universes that are realizations of the MDP).
Definition 7
Given a catastrophe MDP , we denote the MDP defined by
-
-
-
-
For any , .
-
For any , .
Definition 8
Fix . Consider a catastrophe MDP . A policy is said to be locally -sane for when there exists a -perturbation of with a deterministic proper mitigation policy and a -perturbation of s.t.
i. For all , .
ii. is a mitigation policy for .
iii. For any :
iv. For any :
Given , is said to have potential mitigation time when has it as expected mitigation time.
Note that a locally -sane policy still has to be -optimal in . This requirement seems reasonably realistic, since, roughly speaking, it only means that there is some way to “rearrange the universe” that the agent can achieve, and that would be “endorsed” by the advisor, s.t. this rearrangement doesn’t destroy too much value and s.t. after this rearrangement, there is no “impending catastrophe” that the agent has to prevent and the advisor wouldn’t be able to prevent in its place. In particular, this rearrangement may involve creating some subagents inside the environment and destroying the original agent, in which case any policy on is “vacuously” optimal (since all actions have no effect).
We can now formulate the main result.
Theorem 1
Fix an interface , , and for each , an MDP s.t. . Now, consider for each , an -universe which is an -realization of a catastrophe MDP with state function s.t.
i.
ii. For each and , .
iii. For each , .
iv. Given and , if and , then (this condition means that in uncorrupted states, the reward is observable).
Consider also , and a locally -sane policy for . Assume has potential mitigation time . Then, there exists an -policy s.t. for any
Here, is the -policy defined by . and the are regarded as fixed and we don’t explicitly examine their effect on regret, whereas , , and the are regarded as variable with the asymptotics , .
In most interesting cases, (i.e. the “mean time between catastrophes” is much shorter than a discount horizon) and (i.e. the expected mitigation time is much shorter than the discount horizon), which allows simplifying the above to
We give a simple example.
Example 1
Let , . For any and , we fix some and define the catastrophe MDP by
-
, , (adding corrupted states is an easy exercise).
-
If and then
If then
If and then
If and then
-
, if then .
-
and iff (this defines a unique ).
-
If then for any .
-
, .
-
If then , .
We have . Consider the asymptotic regime , , . According to Theorem 1, we get
The probability of a catastrophe (i.e. ending up in state ) for the optimal policy for a given is . Therefore, the probability of a catastrophe for policy is . On the other hand, it is easy to see that the policy has a probability of catastrophe (and in particular regret ): it spends time “exploring” with a probability of a catastrophe on every step.
Note that this example can be interpreted as a version of Christiano’s approval-directed agent, if we regard the state as a “plan of action” that the advisor may either approve or not. But in this formalism, it is a special case of consequentialist reasoning.
Theorem 1 speaks of a finite set of environments, but as before (see Proposition 1 here and Corollary 3 here), there is a “structural” equivalent, i.e. we can use it to produce corollaries about Bayesian agents with priors over a countable set of environments. The difference is that, in this case, we consider asymptotic regimes in which the environment is also variable, so the probability weight of the environment in the prior affects the regret bound. We leave out the details for now.
Appendix A
We start by deriving a more general and more precise version of the non-catastrophic regret bound, in which the optimal policy is replaced by an arbitrary “reference policy” (later it will be related to the mitigation policy) and the dependence on the MDPs is expressed via a bound on the derivative of by .
Definition A.1
Fix . Consider an MDP and policies , . is called -sane relatively to when for any
i.
ii.
Lemma A.1
Fix an interface , and . Now, consider for each , an -universe which is an -realization of an MDP with state function and policies , . Consider also , and assume that
i. is -sane relatively to .
ii. For any and
Then, there exists an -policy s.t. for any
The -notation refers to the asymptotics where is fixed (so we don’t explicitly examine its effect on regret) whereas , and the are variable and , .
The proof of Lemma A.1 is almost identical to the proof of the main theorem for “non-catastrophic” DRL, up to minor modifications needed to pass from absolute to relative regret, and tracking the contribution of the derivative of . We give it in Appendix B.
We will not apply Lemma A.1 directly to the universes of Theorem 1. Instead, we will define new universes using the following constructions.
Definition A.2
Consider a catastrophe MDP. We define the catastrophe MDP as follows.
-
, , .
-
-
For any :
For any , :
For any :
-
For any , .
-
For any , .
-
Now, consider an interface and a which is an -realization of a catastrophe MDP with state function . Denote , and . Denote the projection mapping and correspondingly. We define the -universe and the function as follows
It is easy to see that is an -realization of with state function .
Definition A.3
Consider a catastrophe MDP. We define the catastrophe MDP as follows.
-
, , .
-
-
-
For any , .
-
Now, consider an interface and a which is an -realization of a catastrophe MDP with state function . We define the -universe as follows
It is easy to see that is an -realization of with state function .
Given , we will use the notation
Given an -policy , the -policy is defined by .
In order to utilize condition iii of Definition 8, we need to establish the following relation between and , .
Proposition A.2
Consider a catastrophe MDP, some and a proper mitigation policy. Then
For the purpose of the proof, the following notation will be convenient
Definition A.4
Consider a finite set and some . We define by
As is well known, the limit above always exists.
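The limit in Definition A.4 can be approximated numerically. Reading it as the Cesàro (time-averaged) limit of the powers of a stochastic matrix is an assumption, since the displayed formula was lost in formatting, but it matches the remark that the limit always exists for a finite state space even when the plain limit of powers does not (e.g. for periodic chains).

```python
import numpy as np


def cesaro_limit(m, n_terms=5000):
    """Approximate the Cesaro limit (1/N) * sum_{n<N} M^n of a
    stochastic matrix M. For a finite stochastic matrix this
    averaged limit always exists, even for periodic chains
    where the plain limit of M^n does not."""
    acc = np.zeros_like(m, dtype=float)
    power = np.eye(m.shape[0])
    for _ in range(n_terms):
        acc += power
        power = power @ m
    return acc / n_terms
```

For the deterministic 2-cycle, for instance, the powers alternate between the identity and the swap matrix, yet their average converges to the uniform matrix.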
Proof of Proposition A.2
Consider any and . Since , we have
Let be either of and . Since , we get
Since is a mitigation policy, we get
Finally, since is proper, and . We conclude
Now we will establish a bound on the derivative of by in terms of expected mitigation time, in order to demonstrate condition ii of Lemma A.1.
Proposition A.3
Fix . Consider a catastrophe MDP and a proper mitigation policy with expected mitigation time . Assume that for any and
Then, for any and
Note that, since is a rational function of with no poles on the interval , some finite always exists. Note also that Proposition A.3 is really about Markov chains rather than MDPs, but we don’t make it explicit to avoid introducing more notation.
Proof of Proposition A.3
Let be the Markov chain with transition matrix and initial state . For any , we have
Given , we define by
It is easy to see that can be rewritten as
The expression above is well defined because is a proper mitigation policy and therefore is finite with probability 1.
Let us decompose as defined as follows
We have
The second term can be regarded as a weighted average (since ), where the maximal term in the average is at most , hence
Also, we have
To transform the relative regret bounds for “auxiliary” universes obtained from Lemma A.1 to regret bounds for the original universes, we will need the following.
Definition A.5
Fix and a universe which is an -realization of a catastrophe MDP with state function . Let be a -perturbation of . An environment is said to be a -lift of to when
i. is an -realization of with state function .
ii.
iii. For any and , if then .
iv. For any and , if then there exists s.t.
It is easy to see that such a lift always exists, for example we can take:
Proposition A.4
Consider s.t. and . Let be a universe which is an -realization of a catastrophe MDP with state function . Suppose that is a mitigation policy for that has expected mitigation time . Consider some -policy . Suppose that is a -perturbation of and is a -lift of to . Denote . Then, there is some that depends on nothing s.t.
In order to prove Proposition A.4, we need a relative regret bound for derived from a relative regret bound for .
Proposition A.5
Fix an interface and an -universe which is an -realization of a catastrophe MDP with state function s.t. . Suppose that is a mitigation policy for . Let be any -policy. Then, for any
Proof of Proposition A.5
is a mitigation policy, therefore for any , . It follows that
Also, it is easy to see from the definition of and that
Indeed, any discrepancy between the behavior of and involves transition to the state which yields 0 reward forever. Subtracting these inequalities, we get the desired result.
Another observation we need to prove Proposition A.4 is a bound on the effect of -perturbations in terms of mitigation time.
Proposition A.6
Consider , a universe which is an -realization of a catastrophe MDP with state function , and some . Assume that for any and , . Let be a -perturbation of and a -lift of to . Then,
Proof of Proposition A.6
It is straightforward to construct a probability space , measurable and measurable s.t.
i.
ii. For any and s.t. :
iii. For any and s.t. :
iv. For any , and :
Denote . We have
Also, it is easy to see that for any measurable
It follows that
Using the fact that and the convexity of the function
Using the triangle inequality, we conclude
As a final ingredient towards the proof of Proposition A.4, we will need to use the relative regret bound for to get a certain statistical bound on mitigation time.
Definition A.6
Let be any environment. We define the closed set by
Consider a universe which is an -realization of a catastrophe MDP with state function . We define the measurable function as follows
Proposition A.7
Fix an interface and an -universe which is an -realization of a catastrophe MDP with state function s.t. . Suppose that is a mitigation policy for that has the expected mitigation time . Let be any -policy. Then, there is that depends on nothing s.t. for any , if then
Proof of Proposition A.7
For any , we have
It follows that
If is s.t. for all , , then
Since is a mitigation policy, it follows that
Subtracting the two inequalities, we get
Denote . By choosing sufficiently large, we can assume without loss of generality that the right hand side is positive since, unless , we would have , and unless , we would have . In either case, the inequality we are trying to prove would hold. Also, note that . We get
By the same reasoning as before, we can assume without loss of generality that e.g. . It follows that
Combining this with the previous inequality implies
It is easy to see that there is s.t. for any , and therefore . Therefore, for any such and , , where it is sufficient to assume that . Taking , we conclude (assuming and observing that )
Taking logarithm of both sides
Combining with the inequality we had before, we get
Proof of Proposition A.4
By Proposition A.5, we have
Note that is a -perturbation of and is a -lift of to . The condition of Proposition A.6 holds tautologically due to Definition A.3. Therefore, we can apply Proposition A.6 and get
The only difference between and is the appearance of instead of . Therefore, we can rewrite the above as
Applying Proposition A.7 to each of the last two terms, we get
The following definition will be useful in order to apply Proposition A.4.
Definition A.7
Consider a catastrophe MDP s.t. and a policy . We then define the catastrophe MDP as follows:
-
, , .
-
-
For any and : .
-
For any : .
-
Now consider an -realization of with state function . Then, is clearly an -realization of with the state function defined by .
Note also that and (where interpreting as a policy for or requires choosing an arbitrary value for the state ). Moreover, , , and .
Finally, we are ready to prove the main theorem.
Proof of Theorem 1
For every , denote and the -perturbations of and respectively and the deterministic proper mitigation policy for of Definition 8. Let be a lift of to and denote . Define . Observe that is -sane relatively to in the sense of and both: condition i of Definition A.1 follows by Proposition A.2 from conditions ii and iii of Definition 8, and condition ii of Definition A.1 follows from condition iv of Definition 8. Moreover, by Proposition A.3, we have
Here, we used that is fixed (and thus so is , by conditions i-iii).
Condition iv implies that all the universes in have a common reward function (notice that transition to a corrupted state induces the observation whereas transition to a state in in the universe induces the observation ). Therefore, we can use Lemma A.1 to conclude that there exists an -policy s.t.
It is easy to see that is a -perturbation of . Observe also that , and is a -lift of to . Applying Proposition A.4, we get
Setting we get
By condition i of Definition 8, this implies