“Evil” decision problems in provability logic
A while ago, drnickbone showed in a LessWrong post that for any decision theory, there is a “fair” decision problem on which that decision theory fails—that is, gets a low reward, even though a simple alternative decision theory gets a high reward. Although drnickbone’s argument was given in words, it’s intuitively clear that it could be formalized as math—except, what exactly do “decision theory” and “decision problem” mean in this context?
In this post, I’ll propose definitions of these notions in the context of provability logic, as used in Vladimir Slepnev’s modal version of UDT, and give a formalization of drnickbone’s result. This implies that no single modal decision theory, including UDT, can behave optimally on all “fair” decision problems. But in my next post, I’ll show that modal UDT is nevertheless optimal in a weaker sense: For every provably “fair” modal decision problem, there is a finite set of true axioms that can be added to PA, such that modal UDT using this extension of PA performs optimally on that decision problem. (This is a modal logic version of a result which Kenny Easwaran, Nate Soares and I recently showed in the context of bounded proof length, and which hasn’t been published yet.)
Prerequisite: A primer on provability logic.
Background
Here's an example of how we usually talk about decision problems (Vladimir Slepnev's formalization of Newcomb's problem from "A model of UDT with a halting oracle"):

def U():
    # Fill boxes, according to predicted action.
    box1 = 1000
    box2 = 1000000 if (A() == 1) else 0
    # Compute reward, based on actual action.
    return box2 if (A() == 1) else (box1 + box2)
In other words, we have a universe program, U(), which makes calls to an agent program, A().
The agent program, in turn, uses quining to refer to the source code of the universe program. But while the universe is allowed to call the agent to figure out how it will behave, the agent can’t just call the universe—or we’ll have an infinite regress. Instead, we generally consider agents that use abstract reasoning in the form of proofs to reason about the universe they’re in.
So a universe is a parameterless program, and a decision problem is a universe with a slot in it for an agent program. Similarly, an agent is a parameterless program, and a decision theory is an agent with a slot in it for the universe program. The universe can refer to the agent by calling it, but the agent can refer to the universe’s source (and to its own source) only inside quoted formulas.
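To make this concrete, here is a minimal sketch (my own illustration, not code from the post) of the shape such a proof-based agent might take. The provable() helper is a hypothetical stand-in for an oracle, or bounded proof search, for provability in PA; it is stubbed out here so that the snippet runs, which makes this A() fall through to its default action.

def provable(statement):
    # Stub: a real agent would search for a PA proof of `statement` here.
    return False

def A():
    # Consider the possible rewards of U() above, best first, and take the first
    # action for which a proof of "this action guarantees that reward" is found.
    for reward in (1001000, 1000000, 1000, 0):
        for action in (1, 2):
            if provable(f"A() == {action} implies U() == {reward}"):
                return action
    return 2  # default action if no proof is found

Note that U() appears only inside the quoted statement passed to provable(), matching the point above that the agent may refer to the universe (and to itself) only inside quoted formulas.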
Definitions
Let’s say that a sequence of modal formulas φ1,…,φn are provably mutually exclusive and exhaustive, or p.m.e.e. for short, if GL proves that exactly one of them is true: For example, φ1, φ2 and φ3 are p.m.e.e. if GL⊢(φ1∧¬φ2∧¬φ3)∨(¬φ1∧φ2∧¬φ3)∨(¬φ1∧¬φ2∧φ3).
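For intuition, here is a small sketch (my own encoding, not from the post) of what p.m.e.e. amounts to in the special, box-free case: for purely propositional formulas, "GL proves that exactly one holds" reduces to a propositional tautology, which can be checked by brute force over truth assignments. A real check for formulas containing boxes would need a GL decision procedure instead.

from itertools import product

def pmee_propositional(formulas, variables):
    # Check that exactly one of the given propositional formulas (represented as
    # functions from an assignment dict to bool) is true under every assignment.
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if sum(f(assignment) for f in formulas) != 1:
            return False
    return True

# Example: the two formulas "x" and "not x" are (propositionally) p.m.e.e.
print(pmee_propositional([lambda v: v["x"], lambda v: not v["x"]], ["x"]))  # True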
We’ll say that a (k-outcome) modal universe is a sequence O1,…,Ok, or →O for short, of closed p.m.e.e. formulas, where Oi is interpreted as the proposition that the i’th of the k possible outcomes occurs.
Given a preference ranking on these k possible outcomes, which has n≤k equivalence classes, we’ll write Ui for the modal formula asserting that one of the outcomes in the i’th ranked equivalence class has occurred; for example, if the preference ordering is such that O2, O5 and O7 are the most preferred outcomes, then U1≡O2∨O5∨O7. Clearly, U1,…,Un (or →U for short) is a sequence of p.m.e.e. formulas; for brevity, we can avoid talking about →O, and call the sequence →U an n-level universe.
(I’m using ≡ to mean “same formula”.)
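The passage from outcomes to preference levels is just bookkeeping; here is a sketch with strings standing in for the formulas Oi (the rank assignment below is my own example, except for the top class taken from the text):

def preference_levels(outcomes, ranks):
    # Group the outcome formulas by their preference rank (1 = best) and take the
    # disjunction of each group; ranks are assumed to be the contiguous range 1..n.
    n = max(ranks)
    return [" ∨ ".join(o for o, r in zip(outcomes, ranks) if r == i)
            for i in range(1, n + 1)]

print(preference_levels(["O1", "O2", "O3", "O4", "O5", "O6", "O7"],
                        [2, 1, 3, 2, 1, 2, 1]))
# ['O2 ∨ O5 ∨ O7', 'O1 ∨ O4 ∨ O6', 'O3']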
Similarly, an (m-action) modal agent is a sequence A1,…,Am, or →A for short, of closed p.m.e.e. formulas, where Ai is interpreted as saying that the agent takes its i’th available action.
A modal (m-action, n-level) decision problem is an n-level universe with a slot for an m-action agent →A: that is, a p.m.e.e. sequence P1(→a),…,Pn(→a), or →P(→a) for short, where →a≡(a1,…,am).
Similarly, a decision theory is an agent with a slot for a universe. However, the definition of decision theories isn’t exactly analogous to that of decision problems, because we want to reflect the idea that the agent can’t directly simulate the universe it’s living in, but can only make reference to it inside quoted formulas; in the modal logic world, this translates to: “inside boxes”. Thus, we define a modal (m-action, n-level) decision theory to be a p.m.e.e. sequence of fully modalized formulas T1(→u),…,Tm(→u), or →T(→u) for short, where →u=(u1,…,un).
(Aside: We could require one of these to be provably mutually exclusive and exhaustive only under the assumption that its argument is p.m.e.e., but my current guess is that it’s not worth it.)
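Here is a sketch of the "fully modalized" condition, using a small nested-tuple encoding of modal formulas (my own encoding, not from the post): a formula is fully modalized if every occurrence of a propositional variable sits under at least one box.

def fully_modalized(formula, inside_box=False):
    # Formulas: ("var", name), ("box", phi), ("not", phi), ("top",), ("bot",),
    # and binary connectives such as ("and", phi, psi), ("or", phi, psi), ("iff", phi, psi).
    kind = formula[0]
    if kind == "var":
        return inside_box
    if kind in ("top", "bot"):
        return True
    if kind == "box":
        return fully_modalized(formula[1], True)
    return all(fully_modalized(sub, inside_box) for sub in formula[1:])

# box(u1) is fully modalized; the bare variable u1 is not.
print(fully_modalized(("box", ("var", "u1"))), fully_modalized(("var", "u1")))  # True False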
Fixed points
Now we can use the fixed point theorem to show that any modal decision theory has a well-defined output on any modal decision problem, and vice versa: Since →T(→u) is fully modalized, so is →T(→P(→a)): if we substitute the Pi(→a) for the ui, then, because all instances of the ui were inside boxes, all instances of →a in the resulting formula will be inside boxes as well. Thus, we can find a fixed point →A satisfying GL⊢→A↔→T(→P(→A)) (which I'm using as shorthand for stating that GL⊢Ai↔Ti(→P(→A)) holds for i=1,…,m).
By the uniqueness theorem, for any →B satisfying GL⊢→B↔→T(→P(→B)), we have GL⊢→A↔→B, so our decision theory’s action is well-defined. Moreover, by the rule about substitution of equivalent formulas, it follows that GL⊢→P(→A)↔→P(→B), meaning that the preference level →U:≡→P(→A) that our decision theory achieves on this decision problem is well-defined as well.
Thus, for every pair of sequences (→T(→u),→P(→a)), there's a pair (→A,→U) such that GL⊢→A↔→T(→U) and GL⊢→U↔→P(→A); and this pair is unique up to provable equivalence.
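One way to see that these fixed points are not just well-defined but computable is to evaluate the system on the GL frame whose worlds are 0, 1, 2, …, with world k seeing exactly the worlds below it, so that □φ holds at world k iff φ holds at every world < k. Because the substituted system is fully modalized, the value of each unknown at world k depends only on smaller worlds, so the recursion terminates; and one can check that, assuming PA is sound, evaluating at a world index above the modal depth of the (closed) fixed-point formulas yields their real-world truth values. The sketch below uses my own encoding (not code from the post), and the particular T and P in the example are assumptions chosen for illustration.

from functools import lru_cache

def solve(system, world):
    # `system[i]` is the formula defining the i-th unknown ("var", i); formulas are
    # nested tuples as in the earlier sketch. Returns the unknowns' values at `world`.

    @lru_cache(maxsize=None)
    def value(i, k):
        # Value of the i-th unknown at world k; well-founded because the system is
        # fully modalized, so variables are only consulted at strictly smaller worlds.
        return ev(system[i], k)

    def ev(phi, k):
        tag = phi[0]
        if tag == "top": return True
        if tag == "bot": return False
        if tag == "var": return value(phi[1], k)
        if tag == "not": return not ev(phi[1], k)
        if tag == "and": return ev(phi[1], k) and ev(phi[2], k)
        if tag == "or":  return ev(phi[1], k) or ev(phi[2], k)
        if tag == "iff": return ev(phi[1], k) == ev(phi[2], k)
        if tag == "box": return all(ev(phi[1], j) for j in range(k))
        raise ValueError(tag)

    return tuple(value(i, world) for i in range(len(system)))

# Example (my own choice): T1(→u) :≡ □u1, T2(→u) :≡ ¬□u1 ("take action 1 iff PA proves
# it yields the best outcome") on the problem P1(→a) :≡ a1, P2(→a) :≡ ¬a1 ("action 1
# yields the best outcome"). Substituting P into T gives A1 ↔ □A1, A2 ↔ ¬□A1; by Löb's
# theorem the fixed point of a ↔ □a is provable, so the agent provably takes action 1.
system = (("box", ("var", 0)), ("not", ("box", ("var", 0))))
print(solve(system, 10))  # (True, False): action 1, hence the best outcome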
Extensional (= “fair”) decision problems
drnickbone’s result states that for every decision theory, there is a “fair” decision problem on which this decision theory fails, where “fair” means that an agent’s reward depends only on its actions, not on its source code: Any two agents which output the same actions will achieve the same outcome. We can formalize this in modal logic as (→a↔→b)→(→P(→a)↔→P(→b)), where I’m writing →φ↔→ψ for the conjunction of the φi↔ψi. We’ll say that a modal decision problem is provably extensional if GL proves the above fairness condition.
There is one case in which provable extensionality is easy to show: If all occurrences of propositional variables in →P(→a) are outside boxes, then the fairness condition is a simple propositional tautology, and all such tautologies are provable in GL.
(Aside: One might wonder whether provable extensionality doesn’t follow for all decision problems from the rule about substitution of equivalent formulas, but that’s only an inference rule: if you can prove in GL that φ↔φ′, and you can also prove ψ(φ), then it follows that you can prove ψ(φ′), but GL does not in general prove (φ↔φ′)→(ψ(φ)↔ψ(φ′)). Here’s a counterexample: Note that □⊥ says “PA proves a contradiction”, so ¬□⊥ says “PA is consistent”. Hence, GL does not prove that (⊤↔¬□⊥)→(□⊤↔□(¬□⊥)), because that reduces to ¬□⊥→□¬□⊥, which says “if PA is consistent, then PA proves its own consistency”. That statement is false, and by soundness, GL only proves true things.)
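The aside argues via arithmetical soundness; one can also check it with Kripke semantics (again my own small encoding, not from the post): the formula ¬□⊥→□¬□⊥ fails at world 1 of the two-world GL frame in which world 1 sees only world 0, and since GL proves only formulas valid on all transitive, conversely well-founded frames, it cannot prove this one.

def holds(phi, world, sees):
    # Evaluate a closed modal formula (nested tuples) at a world of a finite frame,
    # where `sees[w]` lists the worlds that world w can see.
    tag = phi[0]
    if tag == "bot": return False
    if tag == "not": return not holds(phi[1], world, sees)
    if tag == "imp": return (not holds(phi[1], world, sees)) or holds(phi[2], world, sees)
    if tag == "box": return all(holds(phi[1], w, sees) for w in sees[world])
    raise ValueError(tag)

sees = {0: [], 1: [0]}                    # world 1 sees world 0; world 0 sees nothing
consistent = ("not", ("box", ("bot",)))   # ¬□⊥, "PA is consistent"
claim = ("imp", consistent, ("box", consistent))
print(holds(claim, 1, sees))              # False: ¬□⊥→□¬□⊥ fails at world 1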
Evil problems
Intuitively, given an m-action, n-level decision theory →T(→u) (where m,n≥2), drnickbone’s “evil” extensional decision problem →P(→a) works as follows: First, →P(→a) checks what →T(→u) would do on this same decision problem. Then it checks whether →a does the same thing. If so, the agent gets the second-best outcome; but if →a differs from →T(→u)’s action, the agent gets the best outcome.
To implement this idea, we first define a sequence F1(→a,→b),…,Fn(→a,→b), where →a stands for the action of the actual agent and →b≡(b1,…,bm) stands for the action that →T(→u) would have taken, as follows:
F1(→a,→b):≡¬(→a↔→b);
F2(→a,→b):≡(→a↔→b); and
Fj+2(→a,→b):≡⊥ for j=1,…,n-2.
Now, we use the fixed point theorem to find a sequence →B such that GL⊢Bi↔Ti(→F(→B,→B)) for i=1,…,m. Finally, we let →P(→a):≡→F(→a,→B). It’s clear from the definition of →F(→a,→b) that the formulas in →P(→a) are p.m.e.e., as required for a decision problem; and →a only occurs outside boxes in →P(→a), so →P(→a) is provably extensional.
The point of this is that →B is now clearly the action of →T(→u) on the decision problem →P(→a): Bi is equivalent to Ti(→F(→B,→B)), which is equivalent to Ti(→P(→B)), and GL⊢→B↔→T(→P(→B)) is exactly the defining condition of “→B says what →T(→u) does on →P(→a)”. Therefore, →P(→a) now implements the intuitive definition of the evil problem: it checks whether →a takes the same action as →T(→u); if so, it returns the second-best outcome; otherwise, it returns the best.
So →T(→u) gets only the second-best outcome, but the following decision theory →T′(→u) gets the best one:
T′1(→u):≡¬B1;
T′2(→u):≡B1; and
T′j+2(→u):≡⊥ for j=1,…,m-2.
Since the T′i are closed formulas (recall that the Bi are closed), →T′(→u) is trivially fully modalized and p.m.e.e., and its action on →P(→a) is just →A′:≡(¬B1,B1,⊥,…,⊥). GL refutes A′1↔B1, hence proves ¬(→A′↔→B), which is P1(→A′); so →T′(→u) provably receives the best outcome.
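To tie the construction together, here is a sketch that carries it out for one concrete decision theory (my own example, not from the post): T1(→u) :≡ □u1, T2(→u) :≡ ¬□u1. It computes →B on the evil problem by evaluating the fixed point on the worlds-see-smaller-worlds frame from the earlier sketch, then compares the outcomes of →T(→u) and of the hard-coded alternative →T′(→u); assuming PA is sound, a world index well above the modal depths involved gives the real-world truth values.

from functools import lru_cache

def make_ev(system):
    # Evaluator for nested-tuple modal formulas on the frame where world k sees the
    # worlds j < k; ("var", i) refers to the i-th unknown, defined by system[i].
    @lru_cache(maxsize=None)
    def ev(phi, k):
        tag = phi[0]
        if tag == "var": return ev(system[phi[1]], k)
        if tag == "not": return not ev(phi[1], k)
        if tag == "and": return ev(phi[1], k) and ev(phi[2], k)
        if tag == "iff": return ev(phi[1], k) == ev(phi[2], k)
        if tag == "box": return all(ev(phi[1], j) for j in range(k))
        raise ValueError(tag)
    return ev

def conj_iff(avec, bvec):
    # The formula →a ↔ →b, i.e. the conjunction of the ai ↔ bi.
    phi = ("iff", avec[0], bvec[0])
    for x, y in zip(avec[1:], bvec[1:]):
        phi = ("and", phi, ("iff", x, y))
    return phi

def T(u1, u2):
    # The example decision theory: T1 :≡ □u1, T2 :≡ ¬□u1 (u2 is unused by this T).
    return (("box", u1), ("not", ("box", u1)))

# Step 1: →B, the action →T takes on its evil problem, from the system Bi ↔ Ti(→F(→B,→B)).
b = (("var", 0), ("var", 1))
F1_bb = ("not", conj_iff(b, b))    # F1(→B,→B) = ¬(→B↔→B)
F2_bb = conj_iff(b, b)             # F2(→B,→B)
ev = make_ev(T(F1_bb, F2_bb))
WORLD = 20                         # comfortably above the modal depths in this example
B = [ev(("var", i), WORLD) for i in range(2)]

# Step 2: on →P(→a) :≡ →F(→a,→B), an action gets the best outcome iff it differs from →B.
# Comparing real-world truth values suffices here because →a occurs outside boxes in →F.
def outcome(action):
    return "best" if action != B else "second-best"

A_prime = [not B[0], B[0]]         # action of →T′: (¬B1, B1)
print("T :", B, "->", outcome(B))              # [False, True] -> second-best
print("T':", A_prime, "->", outcome(A_prime))  # [True, False] -> best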