Comparing LICDT and LIEDT
Attempted versions of CDT and EDT can be constructed using logical inductors, called LICDT and LIEDT. It is shown, however, that LICDT fails XOR Blackmail, and LIEDT fails Newcomb. One interpretation of this is that LICDT and LIEDT do not implement CDT and EDT very well. I argue that they are indeed forms of CDT and EDT, but stray from expectations because they also implement the ratifiability condition I discussed previously. Continuing the line of thinking from that post, I discuss conditions in which LICDT=LIEDT, and try to draw out broader implications for decision theory.
Thanks to Scott and Sam for discussions shaping this post. Also thanks to many participants at AISFP for discussions shaping my current view of counterfactuals.
I’m not sure who gets credit for LICDT and LIEDT, but they’ve been discussed around MIRI since shortly after logical induction itself. LIEDT was sort of the obvious first thing to try; LICDT is a slight variation. (Scott thinks Jessica may have come up with LICDT.) They might be thought of as the punching bag for better logical-induction DTs to be contrasted with (although Tsvi wrote up a likely-better baseline proposal).
Both LICDT and LIEDT use a logical inductor which has been run for n steps, Pn. I’ll abbreviate an agent as An (parameterizing by the same n as for the inductor), with An=a saying that the agent takes action a from some action set A. We can define self-referential sentences Sn↔Pn(Sn)<ϵ. Both LICDT and LIEDT explore when such sentences are true: when exploring, the agent takes whichever action it least expects itself to take conditional on exploring. This forces the agent to take every action with frequency at least ϵ/|A| in the limit, and also makes the exploration pseudorandom in the sense that the logical inductor cannot predict it much better than to assign probability ϵ to exploration (and therefore, neither can any poly-time computable predictor).
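To make the exploration mechanism concrete, here is a minimal sketch in Python. The `prob_of_exploration_sentence` value and the `action_probs` dictionary are hypothetical hooks into the inductor’s beliefs at stage n; nothing here is from the logical induction paper itself.

```python
EPSILON = 0.05

def is_exploration_round(prob_of_exploration_sentence, epsilon=EPSILON):
    """S_n <-> P_n(S_n) < epsilon: the self-referential sentence is true
    exactly when the inductor assigns it probability below epsilon."""
    return prob_of_exploration_sentence < epsilon

def exploration_action(action_probs):
    """On exploration rounds, take whichever action the agent least expects
    itself to take, conditional on exploring.  `action_probs` maps each
    action a to P_n(A_n = a | S_n)."""
    return min(action_probs, key=action_probs.get)

# Example: the inductor thinks exploration is unlikely to pick two-boxing.
print(exploration_action({"one_box": 0.7, "two_box": 0.3}))  # "two_box"
```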
When it isn’t taking an action due to the exploration clause, LIEDT chooses actions based on the expected utility conditional on each action. Utility is defined as a logically uncertain variable (LUV), in the terminology of the logical induction paper. Let Un be the LUV for the utility of An, and En(Un|An=a) be the conditional expectation of Un in Pn given that the agent takes action a. The conditional expectation is always well-defined thanks to the exploration, which ensures that the probability of each action remains above zero.
LICDT is similar, but rather than taking the expectation conditioned on each action, it takes the expectation conditioned on exploring and taking that action. Judging actions by what would happen if you took them randomly, rather than reliably, is supposed to remove the kind of correlation which makes EDT cooperate in prisoner’s dilemma, one-box in Newcomb, et cetera. We will see that this only partly works.
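The difference between the two decision rules is then just one conditioning clause. Below, `cond_exp` is a hypothetical interface to the inductor’s conditional expectations En(Un | ·); this is only a sketch of the definitions above, not a real implementation.

```python
def liedt_choice(actions, cond_exp):
    """LIEDT: argmax_a of E_n(U_n | A_n = a).  Passing exploring=None means
    'do not condition on the exploration sentence at all'."""
    return max(actions, key=lambda a: cond_exp(action=a, exploring=None))

def licdt_choice(actions, cond_exp):
    """LICDT: argmax_a of E_n(U_n | A_n = a and S_n), i.e. judge each action
    by what happens when it is taken via pseudorandom exploration."""
    return max(actions, key=lambda a: cond_exp(action=a, exploring=True))

# Toy usage with a made-up conditional-expectation table:
table = {("a", None): 0.4, ("a", True): 0.1, ("b", None): 0.3, ("b", True): 0.6}
cond_exp = lambda action, exploring: table[(action, exploring)]
print(liedt_choice(["a", "b"], cond_exp))  # "a"
print(licdt_choice(["a", "b"], cond_exp))  # "b"
```

The toy table deliberately gives different payoffs on exploration rounds, which is exactly the kind of “unfair” case discussed later; on fair problems the two rules agree.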
Both LICDT and LIEDT include any observations in their deductive state. (There can be special predicates representing sensory states.) So, they are updateful decision theories.
LICDT and LIEDT aren’t very different, and mostly we just talk about LIEDT, calling it LIDT. However, I’ve only recently realized just how similar LICDT and LIEDT really are.
LIEDT two-boxes in Newcomb.
Suppose we have an LIEDT agent facing Newcomb’s problem. We can specify a sequence of Newcomb problems (for logical inductors of increasing power) by the utility function Un:=10Pn(An=1)−I(An=1), where An=1 is the proposition stating that the agent (of power n) one-boxes, and I() is the indicator function which returns 1 for true propositions and 0 for false. This is a Newcomb problem where Omega is fallible; in fact, Omega can only predict the agent as well as the agent can predict itself, since both use the same logical inductor. (Omega deals with the uncertainty by putting the money in the box with probability equal to its estimate of the probability that LIEDT one-boxes.) The best reward the agent can get is if Omega predicts one-boxing, but the agent unexpectedly two-boxes. Of course, a logical inductor can’t be fooled like this reliably; so the agent is incentivized to one-box.
Theorem. LIEDT converges to two-box on non-exploration rounds as n increases.
Proof. The logical inductor comes to predict En(Un|An=1)=10Pn(An=1)−1 and En(Un|An=2)=10Pn(An=1) with increasing accuracy as n increases, since it has access to Pn(An=1), and since the conditioning is always well-defined thanks to exploration. Therefore, two-boxing eventually becomes the most appealing option. □
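As a sanity check on the proof, here is the conditional-expectation arithmetic with the inductor’s self-estimate p = Pn(An=1) left as a free parameter (the function name is mine):

```python
def newcomb_conditional_expectations(p_one_box):
    """U_n = 10 * P_n(A_n=1) - I(A_n=1).  Since the inductor already knows
    P_n(A_n=1), conditioning on the action does not move that term."""
    eu_one_box = 10 * p_one_box - 1
    eu_two_box = 10 * p_one_box
    return eu_one_box, eu_two_box

for p in (0.0, 0.5, 0.95):
    print(p, newcomb_conditional_expectations(p))  # two-boxing wins by 1 at every p
```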
So, LIEDT does not come to see Omega as correlated with its action, because it knows its own general policy, and the general policy screens off all the correlation between Omega and its action.
Now, it’s true that if Omega were a more powerful predictor than the agent, LIEDT could one-box—but, so could LICDT. In particular, if Omega simply knows the action precisely, then Un=10I(An=1)−I(An=1)=9I(An=1), and both LICDT and LIEDT one-box.
LICDT is XOR-blackmailed.
On the other hand, consider the XOR Blackmail letter, which is supposed to be a case where CDT does better than EDT. There is a difficult-to-predict disaster, Dn, with pseudorandom probability 0.01. However, an AI researcher can predict both the disaster and the AI, and will use that knowledge to try to extract money from the AI. Let’s call the AI sending money to the researcher An=1, and not sending money An=0. The AI researcher sends a letter asking for money if and only if [they predict the AI will respond by sending money XOR Dn]. Let’s say the AI researcher asks for half the cost of the disaster: Un=I(Dn)(−I(An=1)−2)+I(¬Dn)(−I(An=1)). Moreover, the deductive state includes knowledge of the letter, Ln=XOR(Dn,An=1).
Theorem. LICDT converges to sending the blackmailer money when a letter is received, on non-exploration rounds.
Proof. LICDT bases its decision on the utility observed in exploration rounds. Conditional on its receiving the letter and exploring into sending the money, no disaster has occurred. Conditional on its receiving the letter and exploring into not sending money, the disaster has occurred. It will come to predict both of these things accurately, and its conditional expectations will be consistent with them. Therefore, it will send the money. □
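And the corresponding check for XOR Blackmail, using the payout formula above. Given the letter, the (perfect) blackmailer’s prediction matches the actual action, so the letter is equivalent to XOR(send money, disaster); the helper below is just that bookkeeping.

```python
def xor_blackmail_payout(disaster, send_money):
    """U_n = I(D_n)*(-I(A_n=1) - 2) + I(not D_n)*(-I(A_n=1))."""
    return (-int(send_money) - 2) if disaster else -int(send_money)

# Conditional on receiving the letter:
print(xor_blackmail_payout(disaster=False, send_money=True))   # -1: explored into paying
print(xor_blackmail_payout(disaster=True,  send_money=False))  # -2: explored into refusing
```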
Interpretation of the two experiments.
It appears that these aren’t very good implementations of CDT and EDT. The attempted CDT fails the blackmail letter; the attempted EDT fails Newcomb. But, if we look a little closer, something more interesting is going on. I didn’t prove it here, but both of them will one-box when Omega is a perfect predictor, and two-box when Omega is fallible. Both of them will send the blackmailer money when the blackmailer is a perfect predictor, and refuse when the blackmailer is fallible. They appear to be following my “Law of Logical Causality” from SLS III.
When people argue that EDT one-boxes in Newcomb and CDT two-boxes, and that EDT sends the money in XOR Blackmail while CDT abstains, they often aren’t careful to ensure that EDT and CDT are being given the same problem. CDT is supposed to be taking a physical-causation counterfactual, meaning it represents the problem in a Bayesian network representing its physical uncertainty, in which the direction of links lines up with physical causality. If we give EDT the same Bayesian network, it will disregard the causal information contained therein, and compute conditional utilities of actions. But it is unclear that EDT will then one-box in Newcomb. Reasoning about the physical situation, will it really conclude that conditioning on one action or another changes the expected prediction of Omega? How does the conditional probability flow from the action to Omega? Omega’s prediction is based on some observations made in the past. It may be that the agent knows equally well what those observations were; it just doesn’t know exactly what Omega concluded from them. The knowledge of the observations screens off any probabilistic relationship. Or, even worse, it may be that the physical information which the agent has includes its own source code. The agent can’t try to run its own source code on this very decision; it would go into an infinite loop. So, we get stuck when we try to do the reasoning. Similar problems occur in XOR Blackmail.
I claim that reasoning about what CDT and EDT do in Newcomb’s problem and XOR Blackmail implicitly assumes some solution to logical uncertainty. The correlation which EDT is supposed to conclude exists between its action and the predictor’s guess at its action is a logical correlation. But logical induction doesn’t necessarily resolve this kind of logical uncertainty in the intuitive way.
In particular, logical induction implements a version of the ratifiability condition. Because LIEDT agents know their own policies, they are screened off from would-be correlates of their decisions in much the same way LICDT agents are. And because LICDT agents are learning counterfactuals by exploration, they treat predictors who have more information about their own actions than they themselves do as causally downstream—the law of logical causality which I conjectured would, together with ratifiability, imply CDT=EDT.
When does LICDT=LIEDT?
It’s obvious that LIEDT usually equals LICDT when LIEDT converges to taking just one of the actions on non-exploration rounds; the other actions are only taken when exploring, so LIEDT’s expectation of those actions just equals LICDT’s. What about the expectation of the main action? Well, it may differ, if there is a reliable difference in utility when the action is taken via exploration vs. deliberately chosen. However, this seems to be in some sense unfair; the environment is basing its payoff on the agent’s reasons for taking an action, rather than on the action alone. While we’d like to be able to deal with some such environments, allowing this in general allows an environment to punish one decision theory selectively. So, for now, we rule such decision problems out:
Definition: Decision problem. A decision problem is a function Un(An) which takes an agent and a step number, and yields a LUV which is the payout.
Definition: Fair decision problem. A fair decision problem is a decision problem such that the same limiting action probabilities on the part of the agent imply the same limiting expectations on utility per action, and these expectations do not differ between exploration actions and plain actions. Formally: if limn→∞Pn(An=a) exists for all a∈A, and a second agent Bn has limiting action probabilities which also exist and are the same, then limn→∞En(Un(An)|An=a) also exist and are the same as the corresponding quantities for Bn; and furthermore, limn→∞En(Un(An)|An=a∧Sn) exist and are the same as the limits without Sn.
I don’t expect that this definition is particularly good; alternatives welcome. In particular, using the inductor itself to estimate the action probability introduces an unfortunate dependence on the inductor.
Compare to the notion of fairness in Asymptotic DT.
Definition: A continuous fair decision problem is a fair problem for which the function from limiting action probabilities to limiting expected utilities is continuous.
Observation. Continuous fair decision problems have “equilibrium” action distributions, where the best response to the action utilities induced by that distribution is consistent with the distribution itself. These are the same whether “best response” is in the LICDT or LIEDT sense, since the expected utility is required to be the same in both senses. If either LICDT or LIEDT converges on some such problem, then clearly it converges to one of these equilibria.
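Here is a toy illustration of that equilibrium notion, for a made-up continuous fair problem with two actions whose limiting expected utilities depend only (and continuously) on the agent’s limiting probability of action 1. Damped fixed-point iteration over an epsilon-greedy best response finds a distribution consistent with the utilities it induces; this is only a sketch under those assumptions, not a claim about how logical inductors reach equilibrium.

```python
EPSILON = 0.05

def expected_utilities(p1):
    """Hypothetical continuous fair problem: each action's limiting expected
    utility is a continuous function of the agent's probability of action 1."""
    return {0: p1, 1: 1.0 - p1}

def best_response(p1, epsilon=EPSILON):
    """Probability of action 1 under an epsilon-greedy best response."""
    u = expected_utilities(p1)
    greedy = 1.0 if u[1] > u[0] else (0.0 if u[0] > u[1] else 0.5)
    return epsilon / 2 + (1 - epsilon) * greedy

p = 0.9
for _ in range(10000):                  # damped iteration toward a fixed point
    p = 0.999 * p + 0.001 * best_response(p)
print(round(p, 2))                      # ~0.5: consistent with its own induced utilities
```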
This doesn’t necessarily mean that LICDT and LIEDT converge to the same behavior, though, since it is possible that they fail to converge, or that they converge to different equilibria. I would be somewhat surprised if there were some essential difference in behavior between LICDT and LIEDT on these problems, but I’m not sure what conjecture to state.
I’m more confident in a conjecture for a much narrower notion of fairness (I wasn’t able to prove this, but I wouldn’t be very surprised if it turned out not to be that hard to prove):
Definition. Deterministically fair decision problem: A decision problem is deterministically fair if the payout on instance n is only a function of the action probabilities according to Pn and of the actual action taken (where Pn is the logical inductor used by the agent itself).
Conjecture. Given a continuous deterministically fair decision problem, the set of mixed strategies to which LICDT may converge and the set to which LIEDT may converge, varying the choice of logical inductor, are the same. That is, if LICDT converges to an equilibrium, we can find a logical inductor for which LIEDT converges to that same equilibrium, and vice versa.
This seems likely to be true, since the narrowness of the decision problem class leaves very little room to wedge the two decision theories apart.
The difficulty of proving comparison theorems for LICDT and LIEDT is closely related to the difficulty of proving optimality theorems for them. If we had a good characterization of the convergence and optimality conditions of these two decision theories, we would probably be in a better position to study the relationship between them.
At least we can prove the following fairly boring theorem:
Theorem. For a decision problem in which the utility depends only on the action taken, in a way which does not depend on n or on the agent, and for which all the actions have different utilities, LIEDT and LICDT will converge to the same distribution on actions.
Proof. Epsilon exploration ensures that there continues to be some probability of each action, so the logical inductor will eventually learn the action utilities arbitrarily well. Once the utilities are accurate enough to put the actions in the right ordering, the agent will simply take the best action, with the exception of exploration rounds. □
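A toy simulation of this theorem’s setup. The shrinking Gaussian noise is my own stand-in for the inductor’s improving utility estimates, not anything from the logical induction paper; the point is just that once the estimates order the actions correctly, both decision rules pick the best action except on exploration rounds.

```python
import random

TRUE_UTILITY = {"a": 1.0, "b": 0.5, "c": 0.2}   # fixed utilities, all different
EPSILON = 0.05

def estimated_utilities(n):
    """Stand-in for E_n: estimates converge to the true utilities as n grows."""
    return {act: u + random.gauss(0, 1.0 / (n + 1)) for act, u in TRUE_UTILITY.items()}

def act(n):
    if random.random() < EPSILON:                 # pseudorandom exploration
        return random.choice(list(TRUE_UTILITY))
    est = estimated_utilities(n)                  # on a fixed-utility problem the
    return max(est, key=est.get)                  # LIEDT and LICDT estimates agree

choices = [act(n) for n in range(1000, 2000)]
print(choices.count("a") / len(choices))          # roughly 0.97: best action, minus exploration
```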
Law of Logical Counterfactuals
The interesting thing here is that LIEDT seems to be the same as LICDT under the assumption that the environment doesn’t specifically mess with it by doing something differently on exploration rounds and non-exploration rounds. This is sort of obvious from the definitions of LICDT and LIEDT (despite the difficulty of actually proving the result). However, notice that this is quite different from the usual statements of the difference between EDT and CDT.
I claim that LIEDT and LICDT are more-or-less appropriately reflecting the spirit of EDT and CDT under (1) the condition of ratifiability, which is enforced by the self-knowledge properties of logical inductors, and (2) the “law of logical counterfactuals” (LLC) I posited last time, which is enforced by the way LICDT learns causality via experimentation. You can’t learn that something which knows more about your action than you do is upstream of you if you make the assumption that you can perform randomized controlled trials!
Since the law of logical counterfactuals was vaguely stated, I had hoped to learn something about what shape it has to take by examining this case. Unfortunately, this case is a bit intractable. However, it did suggest an additional condition: (3) the environment’s behavior doesn’t depend on whether you explore. (1), (2), and (3) are stated rather informally for now, but together are supposed to imply CDT=EDT, in a formal analysis which is yet-to-be.
Actually, assumption (3) is a sort of “randomized controlled trials” assumption, which seems to justify (2). You assume that you can arrange for actions to be uncorrelated with anything in the world, and that justifies your use of exploration to learn about counterfactuals.
It’s not obvious at first, but exploration-based counterfactuals are very similar to counterfactuals based on the chicken rule in proof-based decision theories such as MUDT. The chicken rule requires that if you can prove what your own action will be, you take a different action. This allows the proofs from alternative actions to be well-behaved, rather than exploding in contradiction.
You can see how that’s analogous to the way epsilon-exploration makes sure that LICDT and LIEDT can condition on their own actions without dividing by zero. It’s also analogous for a deeper reason. Remember the trick of using self-referential sentences for exploration, at the beginning of this writeup? A very similar trick is to take any action which the logical inductor currently assigns probability <ϵ. In other words, do anything you strongly believe you won’t do. This is very close to the chicken rule; just substitute probabilistic belief for proof.
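The probabilistic variant of the chicken rule is easy to write down; `current_prob` below is a hypothetical hook into the inductor’s current probability for each action, and `default_choice` is whatever the agent would otherwise do.

```python
EPSILON = 0.05

def probabilistic_chicken(actions, current_prob, default_choice, epsilon=EPSILON):
    """Take any action the inductor currently assigns probability < epsilon
    ('do anything you strongly believe you won't do'); otherwise act normally."""
    for a in actions:
        if current_prob(a) < epsilon:
            return a
    return default_choice(actions)
```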
In fact, we can go a bit further. One of the shortcomings of MUDT is that it doesn’t do what we’d like in the Agent Simulates Predictor scenario, and a host of other problems where a more powerful logic is required. We can address these issues by giving it a more powerful logic, but that does not address the intuitive concern: it seems as if we should be able to solve these problems without fully trusting a more powerful logic. If we strongly suspect that the more powerful logic is consistent, we should be able to do mostly the same thing.
And indeed, we can accomplish this with logical induction. What we do is play chicken against anything which seems to predict our action beyond a certain tolerable degree. This gives us exactly the pseudorandom epsilon-exploration of LICDT/LIEDT. Unfortunately, there’s a version of Agent Simulates Predictor which trips these up as well. Alas. (Asymptotic Decision Theory, on the other hand, gets Agent Simulates Predictor right; but, it is bad for other reasons.)
It’s interesting that in proof-based decision theory, you never have to take the action; you just threaten to take it. (Or rather: you only take it if your logic is inconsistent.) It’s like being able to learn from an experiment which you never perform, merely by virtue of putting yourself in a position where you almost do it.
Troll Bridge
Condition (3) is a sort of “no Troll Bridge” condition. The write-ups of Troll Bridge on this forum are somewhat inadequate references to make my point well (they don’t even call it Troll Bridge!), but the basic idea is that you put something in the environment which depends on the consistency of PA, in a way which makes a MUDT agent do the wrong thing via a very curious Löbian argument. There’s a version of Troll Bridge which works on the self-referential exploration sentences Sn rather than the consistency of PA. It seems like what’s going on in Troll Bridge has a lot to do with part of the environment correlating itself with your internal machinery; specifically, the internal machinery which ensures that you can have counterfactuals.
Troll Bridge is a counterexample to the idea of proof-length counterfactuals. The hypothesis behind proof-length counterfactuals is that A is a legitimate counterfactual consequence of B if a proof of A from B is much shorter than a proof of ¬A from B. It’s interesting that my condition (3) rules it out like this; it suggests a possible relationship between what I’m doing and proof-length counterfactuals. But I have suspected such a relationship since before writing SLS III. The connection is this:
Proof-length counterfactuals are consistent with MUDT, and with variants of MUDT based on bounded-time proof search, because what the chicken rule does is put proofs of what the agent does juuust out of reach of the agent itself. If the environment is tractable enough that there exist proofs of the consequences of actions which the agent will be able to find, the chicken rule pushes the proofs of the agent’s own actions far enough out that they won’t interfere with that. As a result, you have to essentially step through the whole execution of the agent’s code in order to prove what it does, which of course the agent can’t do itself while it’s running. We can refine proof-length counterfactuals to make an even tighter fit: given a proof search, A is a legitimate counterfactual consequence of B if the search finds a proof of A from B before it finds one for ¬A.
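The refined criterion at the end of this paragraph can be phrased as a short procedure over a proof search. The `prover` interface below (`search_from`, `proves`, `proves_negation`) is entirely hypothetical, a stand-in for whatever bounded proof search the agent uses.

```python
def counterfactual_consequence(prover, b, a, max_steps=10**6):
    """A is a legitimate counterfactual consequence of B if the given proof
    search finds a proof of A from B before it finds a proof of not-A from B."""
    for i, theorem in enumerate(prover.search_from(b)):   # hypothetical enumeration
        if theorem.proves(a):
            return True
        if theorem.proves_negation(a):
            return False
        if i >= max_steps:
            break
    return False   # search gave out before settling either way
```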
This makes it clear that the notion of counterfactual is quite subjective. Logical inductors actually make it significantly less so, because an LI can play chicken against all proof systems; it will be more effective in the short run at playing chicken against its own proof system, but in the long run it learns to predict theorems as fast as any proof system would, so it can play chicken against them all. Nonetheless, LIDT still plays chicken against a subjective notion of predictability.
And this is obviously connected to my assumption (3). LIDT tries valiantly to make (3) true by decorrelating its actions with anything which can predict them. Sadly, as with Agent Simulates Predictor, we can construct a variant of Troll Bridge which causes this to fail anyway.
In any case, this view of what exploration-based counterfactuals are makes me regard them as significantly more natural than I think others do. Nonetheless, the fact remains that neither LIDT nor MUDT does all the things we’d like. They don’t do what we would like in multi-agent situations. They don’t solve Agent Simulates Predictor or Troll Bridge. It seems to me that when they fail, they fail for largely the same reasons, thanks to the strong analogy between their notions of counterfactual.
(There are some disanalogies, however; for one, logical inductor decision theories have a problem of selecting equilibria which doesn’t seem to exist in proof-based decision theories.)