Naturalized induction – a challenge for evidential and causal decision theory
As some of you may know, I disagree with many of the criticisms leveled against evidential decision theory (EDT). Most notably, I believe that Smoking lesion-type problems don’t refute EDT. I also don’t think that EDT’s lack of updatelessness leaves much room for disagreement, given that EDT recommends immediate self-modification to updatelessness. However, I do believe there are some issues with run-of-the-mill EDT. One of them is naturalized induction. It is in fact not only a problem for EDT but also for causal decision theory (CDT) and most other decision theories that have been proposed both inside and outside of academia. It does not affect logical decision theories, however.
The role of naturalized induction in decision theory
Recall that EDT prescribes taking the action that maximizes expected utility, i.e.
$$\arg\max_{a\in A} \mathbb{E}\left[U(w)\mid a,o\right] = \arg\max_{a\in A} \sum_{w\in W} P(w\mid a,o)\, U(w),$$
where A is the set of available actions, U is the agent’s utility function, W is a set of possible world models, and o represents the agent’s past observations (which may include information the agent has collected about itself). For the purposes of this article, CDT works in a similar way, except that instead of conditioning on a in the usual way, it calculates some causal counterfactual, such as Pearl’s do-calculus: P(w|do(a),o). The problem of naturalized induction is that of assigning posterior probabilities to world models P(w|a,o) (or P(w|do(a),o) or whatever) when the agent is naturalized, i.e., embedded into its environment.
Consider the following example. Let’s say there are 5 world models W={w1,...,w5}, each of which has equal prior probability. These world models may be cellular automata. Now, the agent makes the observation o. It turns out that worlds w1 and w2 don’t contain any agents at all, and w3 contains no agent making the observation o. The other two world models, on the other hand, are consistent with o. Thus, P(wi|o)=0 for i=1,2,3 and P(wi|o)=1/2 for i=4,5. Let’s further assume that the agent has only two actions A={a1,a2}, that in world model w4 the only agent making observation o takes action a1, and that in w5 the only agent making observation o takes action a2. Then P(w4|a1,o)=1=P(w5|a2,o) and P(w5|a1,o)=0=P(w4|a2,o). Thus, if, for example, U(w5)>U(w4), an EDT agent would take action a2 to ensure that world model w5 is actual.
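To make the example concrete, here is a minimal Python sketch of the five-world calculation. The specific utility values are illustrative placeholders (only U(w5)>U(w4) matters), and the dictionaries simply encode the assumptions stated above.

```python
# Minimal sketch of the five-world EDT example (toy numbers only).
worlds = ["w1", "w2", "w3", "w4", "w5"]
prior = {w: 1 / 5 for w in worlds}

# Which action the (unique) agent observing o takes in each world;
# None means the world contains no agent making observation o.
agent_action = {"w1": None, "w2": None, "w3": None, "w4": "a1", "w5": "a2"}

# Illustrative utilities with U(w5) > U(w4); the first three worlds' values
# are irrelevant because their posterior probability given o is zero.
utility = {"w1": 0, "w2": 0, "w3": 0, "w4": 1, "w5": 2}

def posterior(action):
    """P(w | a, o): keep worlds whose o-observing agent takes `action`, renormalize."""
    consistent = {w: prior[w] for w in worlds if agent_action[w] == action}
    total = sum(consistent.values())
    return {w: p / total for w, p in consistent.items()}

def expected_utility(action):
    return sum(p * utility[w] for w, p in posterior(action).items())

print({a: expected_utility(a) for a in ["a1", "a2"]})  # {'a1': 1.0, 'a2': 2.0}
print(max(["a1", "a2"], key=expected_utility))          # 'a2' -> EDT ensures w5
```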
The main problem of naturalized induction
This example makes it sound as though it’s clear what posterior probabilities we should assign. But in general, it’s not that easy. For one, there is the issue of anthropics: if one world model w1 contains more agents observing o than another world model w2, does that mean P(w1∣o)>P(w2∣o)? Whether CDT and EDT can reason correctly about anthropics is an interesting question in itself (cf. Bostrom 2002; Armstrong 2011; Conitzer 2015), but in this post I’ll discuss a different problem in naturalized induction: identifying instantiations of the agent in a world model.
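To restate the two possible answers (this is only an illustration of the question, not an endorsement of either rule): writing n_w(o) for the number of agents in w observing o, one could weight worlds by the number of such observers, or merely by whether such an observer exists at all,

$$P(w\mid o) \propto P(w)\, n_w(o) \qquad \text{vs.} \qquad P(w\mid o) \propto P(w)\,\mathbb{1}\!\left[n_w(o) > 0\right].$$

With equal priors, the first rule implies P(w1|o)>P(w2|o) whenever w1 contains more observers of o; the second does not.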
It seems that the core of the reasoning in the above example was that some worlds contain an agent observing o and others don’t. So, besides anthropics, the central problem of naturalized induction appears to be identifying agents making particular observations in a physicalist world model. While this can often be done uncontroversially – a world containing only rocks contains no agents – it seems difficult to specify how it works in general. The core of the problem is a type mismatch between the “mental stuff” (e.g., numbers or strings) of the observation o and the “physics stuff” (atoms, etc.) of the world model. Rob Bensinger calls this the problem of “building phenomenological bridges” (BPB) (also see his Bridge Collapse: Reductionism as Engineering Problem).
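One way to see the mismatch is to write down the signature a phenomenological bridge would have to have. The Python types below are purely illustrative placeholders; the point is that nobody knows how to fill in the function body in a principled way.

```python
from typing import List, Optional

PhysicalState = List[List[int]]  # e.g., a cellular-automaton configuration ("physics stuff")
Observation = str                # e.g., a number or string ("mental stuff")

def phenomenological_bridge(world: PhysicalState) -> Optional[Observation]:
    """Return the observation made by an instantiation of the agent in this
    physical state, or None if the state contains no such instantiation.
    BPB is precisely the problem of specifying this function."""
    raise NotImplementedError  # no objective, agreed-upon way to write this
```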
Sensitivity to phenomenological bridges
Sometimes, the decisions made by CDT and EDT are very sensitive to whether a phenomenological bridge is built or not. Consider the following problem:
One Button Per Agent. There are two similar agents with the same utility function. Each lives in her own room. Both rooms contain a button. If agent 1 pushes her button, it creates 1 utilon. If agent 2 pushes her button, it creates −50 utilons. You know that agent 1 is an instantiation of you. Should you press your button?
Note that this is essentially Newcomb’s problem with potential anthropic uncertainty (see the second paragraph here) – pressing the button is like two-boxing, which causally gives you $1k if you are the real agent but costs you $1M if you are the simulation.
If agent 2 is sufficiently similar to you to count as an instantiation of you, then you shouldn’t press the button. If, on the other hand, you believe that agent 2 does not qualify as something that might be you, then it comes down to what decision theory you use: CDT would press the button, whereas EDT wouldn’t (assuming that the two agents are strongly correlated).
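A rough expected-utility tally makes the sensitivity explicit (assuming, as stated, that the two agents’ decisions are perfectly correlated; the 50/50 anthropic split in the first case is just an illustrative choice). If agent 2 counts as a possible instantiation of you, CDT’s causal expected utility of pressing is

$$\tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot(-50) = -24.5 < 0,$$

and EDT’s evidential expected utility is $1 + (-50) = -49 < 0$ (both correlated agents press), so neither theory presses. If you are certain to be agent 1, CDT counts only the causal effect of your own button, $+1 > 0$, and presses, while EDT still conditions on the correlated agent 2 pressing and refrains.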
It is easy to specify a problem where EDT, too, is sensitive to the phenomenological bridges it builds:
One Button Per World. There are two possible worlds. Each contains an agent living in a room with a button. The two agents are similar and have the same utility function. The button in world 1 creates 1 utilon, the button in world 2 creates −50 utilons. You know that the agent in world 1 is an instantiation of you. Should you press the button?
If you believe that the agent in world 2 is an instantiation of you, both EDT and CDT recommend you not to press the button. However, if you believe that the agent in world 2 is not an instantiation of you, then naturalized induction concludes that world 2 isn’t actual and so pressing the button is safe.
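Concretely, with an (illustrative) equal prior over the two worlds: if the agent in world 2 might be you, the expected utility of pressing is $\tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot(-50) = -24.5 < 0$, so both theories refrain. If she cannot be you, conditioning on your observations leaves only world 1, and pressing is worth $+1 > 0$.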
Building phenomenological bridges is hard and perhaps confused
So, to solve the problem of naturalized induction and apply EDT/CDT-like decision theories, we need to solve BPB. The behavior of an agent is quite sensitive to how we solve it, so we had better get it right.
Unfortunately, I am skeptical that BPB can be solved. Most importantly, I suspect that statements about whether a particular physical process implements a particular algorithm can’t be objectively true or false. There seems to be no way of testing any such relations.
Probably we should think more about whether BPB really is doomed. There even appears to be some philosophical literature worth looking into (see this Brian Tomasik post; cf. some of Hofstadter’s writings and the literatures surrounding “Mary the color scientist”, the computational theory of mind, computation in cellular automata, etc.). But at this point, BPB seems confusing/confused enough that it is worth looking into alternatives.
Assigning probabilities pragmatically?
One might think that one could map between physical processes and algorithms on a pragmatic or functional basis. That is, one could say that a physical process A implements a program p to the extent that the results of A correlate with the output of p. I think this idea goes in the right direction, and we will later see an implementation of this pragmatic approach that does away with naturalized induction. However, it feels inappropriate as a solution to BPB. The main problem is that two processes can correlate in their output without having similar subjective experiences. For instance, it is easy to show that merge sort and insertion sort have the same output for any given input, even though they have very different “subjective experiences”. (Another problem is that the dependence between two random variables cannot be expressed as a single number, so it is unclear how to translate the entire joint probability distribution of the two into a single number determining the likelihood of the algorithm being implemented by the physical process. That said, if implementing an algorithm is conceived of as binary – either true or false – one could just require perfect correlation.)
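As a small self-contained illustration of that point, the following Python snippet checks that the two algorithms always agree on their output, while counting comparisons as a crude stand-in for how different their internal processes are:

```python
import random

def insertion_sort(xs):
    xs, comparisons = list(xs), 0
    for i in range(1, len(xs)):
        j = i
        while j > 0:
            comparisons += 1
            if xs[j - 1] <= xs[j]:
                break
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return xs, comparisons

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs), 0
    mid = len(xs) // 2
    left, cl = merge_sort(xs[:mid])
    right, cr = merge_sort(xs[mid:])
    merged, comparisons, i, j = [], cl + cr, 0, 0
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, comparisons

data = [random.randint(0, 100) for _ in range(50)]
(out_m, c_m), (out_i, c_i) = merge_sort(data), insertion_sort(data)
assert out_m == out_i == sorted(data)   # outputs perfectly correlated
print("comparisons:", c_m, "vs", c_i)   # internal processes differ substantially
```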
Getting rid of the problem of building phenomenological bridges
If we adopt an EDT perspective, it seems clear what we have to do to avoid BPB. If we don’t want to decide whether some world contains the agent, then it appears that we have to artificially ensure that the agent views itself as existing in all possible worlds. So, we may take every world model and add a causally separate or non-physical entity representing the agent. I’ll call this additional agent a logical zombie (l-zombie) (a concept introduced by Benja Fallenstein for a somewhat different decision-theoretical reason). To avoid all BPB, we will assume that the agent pretends that it is the l-zombie with certainty. I’ll call this the l-zombie variant of EDT (LZEDT). It is probably the most natural evidentialist logical decision theory.
Note that in the context of LZEDT, l-zombies are a fiction used for pragmatic reasons. LZEDT doesn’t make the metaphysical claim that l-zombies exist or that you are secretly an l-zombie. For discussions of related metaphysical claims, see, e.g., Brian Tomasik’s essay Why Does Physics Exist? and references therein.
LZEDT reasons about the real world via the correlations between the l-zombie and the real world. In many cases, LZEDT will act as we expect an EDT agent to act. For example, in One Button Per Agent, it doesn’t press the button because that ensures that neither agent pushes the button.
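As a rough sketch of how this plays out in One Button Per Agent (the data structures and the assumption that both in-room agents’ decisions perfectly mirror the l-zombie’s are my own simplifications, not a worked-out formalism):

```python
# One world model containing the two agents from One Button Per Agent.
# "correlated" marks whether an agent's decision mirrors the l-zombie's.
world = [
    {"name": "agent 1", "button_utility": 1,   "correlated": True},
    {"name": "agent 2", "button_utility": -50, "correlated": True},
]

def lzedt_expected_utility(action):
    """Utility of the world given that the l-zombie takes `action`.
    Correlated agents act like the l-zombie; for simplicity, uncorrelated
    agents are assumed to refrain."""
    total = 0
    for agent in world:
        if agent["correlated"] and action == "press":
            total += agent["button_utility"]
    return total

for a in ["press", "refrain"]:
    print(a, lzedt_expected_utility(a))   # press -> -49, refrain -> 0
# LZEDT refrains, matching the EDT verdict described above.
```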
LZEDT doesn’t need any additional anthropics but behaves like anthropic decision theory/EDT+SSA, which seems alright.
Although LZEDT may assign a high probability to worlds that don’t contain any actual agents, it doesn’t optimize for these worlds because it cannot significantly influence them. So, in a way, LZEDT adopts the pragmatic/functional approach (mentioned above) of giving more weight, other things being equal, to worlds that contain many closely correlated agents.
LZEDT is automatically updateless. For example, it gives the money in counterfactual mugging. However, it invariably implements a particularly strong version of updatelessness. It’s not just updatelessness in the way that “son of EDT” (i.e., the decision theory that EDT would self-modify into) is updateless, it is also updateless w.r.t. its existence. So, for example, in the One Button Per World problem, it never pushes the button, because it thinks that the second world, in which pushing the button generates −50 utilons, could be actual. This is the case even if the second world very obviously contains no implementation of LZEDT. Similarly, it is unclear what LZEDT does in the Coin Flip Creation problem, which EDT seems to get right.
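Concretely (again with an illustrative equal prior over the two worlds, and assuming the agent in world 2 still decides like the l-zombie even if she is not an implementation of it): since LZEDT never rules world 2 out, pressing is worth

$$\tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot(-50) = -24.5 < 0,$$

so it refrains.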
So, LZEDT optimizes for world models that naturalized induction would assign zero probability to. It should be noted that this is not done on the basis of some exotic ethical claim according to which non-actual worlds deserve moral weight.
I’m not yet sure what to make of LZEDT. It is elegant in that it effortlessly gets anthropics right, avoids BPB and is updateless without having to self-modify. On the other hand, not updating on your existence is often counterintuitive, and even regular updatelessness is, in my opinion, best justified via precommitment. Its approach to avoiding BPB isn’t immune to criticism either. In a way, it is just a very wrong approach to BPB (mapping your algorithm onto fictions rather than onto your real instantiations). Perhaps it would be more reasonable to use regular EDT with an approach to BPB that counts anything that could potentially be you as an instantiation of you?
Of course, LZEDT also inherits some of the potential problems of EDT, in particular, the 5-and-10 problem.
CDT is more dependent on building phenomenological bridges
It seems much harder to get rid of the BPB problem in CDT. Obviously, the l-zombie approach doesn’t work for CDT: because none of the l-zombies has a physical influence on the world, “LZCDT” would always be indifferent between all possible actions. More generally, because CDT exerts no control via correlation, it needs to believe that it might be X if it wants to control X’s actions. So, causal decision theory only works with BPB.
That said, a causalist approach to avoiding BPB via l-zombies could be to tamper with the definition of causality such that the l-zombie “logically causes” the choices made by instantiations in the physical world. As far as I understand it, most people at MIRI currently prefer this flavor of logical decision theory.
Acknowledgements
Most of my views on this topic formed in discussions with Johannes Treutlein. I also benefited from discussions at AISFP.