Hi, can you explain EDT to me (by email)? :)
As far as I can reconstruct EDT’s algorithm, it goes something like this:
1) I know that smoking is correlated with lung cancer.
2) I’ve read in a medical journal that smoking and lung cancer have a common cause, some kind of genetic lesion. I don’t know if I have that lesion.
3) I’d like to smoke now, but I’m not sure if that’s the best decision.
4) My friend, a causal decision theorist, told me that smoking or not smoking cannot affect the lesion, which I either already have or I don’t. But I don’t completely buy that reasoning. I prefer to use something else, which I will call “evidential decision theory”.
5) To figure out the best action to take, first I will counterfactually imagine myself as an automaton whose actions are chosen randomly, taking into account whether it has the lesion, using the frequencies observed in the world. So an automaton with the lesion will have a higher probability of smoking and a higher probability of cancer.
6) Next, I will figure out what the automaton’s actions say about its utility, using ordinary conditional probabilities and expected values. It looks like the utility of automatons that smoke is lower than the utility of those that don’t, because the former ones are more likely to get cancer.
7) Now I will remember that I’m not an automaton, and choose to avoid smoking based on the above reasoning!
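To make steps 5 and 6 concrete, here’s a minimal sketch of the calculation I have in mind, with frequencies and utilities that are completely made up:

```python
# Toy model of the smoking lesion world. All numbers are made up.
p_lesion = 0.2
p_smoke_given_lesion = {True: 0.8, False: 0.2}
p_cancer_given_lesion = {True: 0.5, False: 0.01}

# Utilities: smoking is mildly enjoyable (+1), cancer is very bad (-100).
def utility(smoke, cancer):
    return (1 if smoke else 0) + (-100 if cancer else 0)

# Step 6: ordinary conditional expected utility E[U | action], where the
# action is treated as evidence about the lesion via Bayes' rule.
def expected_utility_given(action):
    weights = {}
    for lesion in (True, False):
        p_l = p_lesion if lesion else 1 - p_lesion
        p_a = p_smoke_given_lesion[lesion] if action else 1 - p_smoke_given_lesion[lesion]
        weights[lesion] = p_l * p_a
    total = sum(weights.values())
    eu = 0.0
    for lesion in (True, False):
        p_lesion_given_action = weights[lesion] / total
        p_c = p_cancer_given_lesion[lesion]
        eu += p_lesion_given_action * (p_c * utility(action, True) + (1 - p_c) * utility(action, False))
    return eu

print(expected_utility_given(True))   # E[U | smoke]       = -24.5
print(expected_utility_given(False))  # E[U | don't smoke] ~ -3.9
# Since E[U | smoke] < E[U | don't smoke], this calculation says: don't smoke.
```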
Does that make sense?
The problem with this line of reasoning is that the desire to smoke is correlated with smoking, and therefore with the genetic lesion. Since an EDT agent is assumed to perform Bayesian updates, it should update its probability of having the lesion upon observing that it has a desire to smoke.
How much it should update depends on its prior.
If, according to its prior, the desire to smoke largely screens off the correlation between the lesion and smoking, then the agent will choose to smoke.
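Here’s a minimal sketch of that screening-off, with made-up numbers: the lesion causes a desire to smoke, and the action depends on the lesion only through the desire.

```python
# Toy "tickle defense" model (all numbers made up): lesion -> desire -> smoking,
# so the desire screens the action off from the lesion.
p_lesion = 0.2
p_desire_given_lesion = {True: 0.9, False: 0.1}
p_cancer_given_lesion = {True: 0.5, False: 0.01}

def utility(smoke, cancer):
    return (1 if smoke else 0) + (-100 if cancer else 0)

def p_lesion_given_desire(desire):
    # Bayes' rule; the action never enters, because given the desire the
    # action carries no further news about the lesion.
    like = lambda lesion: p_desire_given_lesion[lesion] if desire else 1 - p_desire_given_lesion[lesion]
    num = p_lesion * like(True)
    return num / (num + (1 - p_lesion) * like(False))

def expected_utility(desire, action):
    p_l = p_lesion_given_desire(desire)
    eu = 0.0
    for lesion, p in ((True, p_l), (False, 1 - p_l)):
        p_c = p_cancer_given_lesion[lesion]
        eu += p * (p_c * utility(action, True) + (1 - p_c) * utility(action, False))
    return eu

# An agent that has already noticed its desire to smoke compares:
print(expected_utility(True, True))   # smoke
print(expected_utility(True, False))  # don't smoke
# The two differ only by the +1 enjoyment term, so this EDT agent smokes.
```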
Sorry, are you saying that EDT is wrong, or that my explanation of EDT is wrong? If it’s the former, I agree. If it’s the latter, can you give a different explanation? Note that most of the literature agrees that EDT doesn’t smoke in the smoking lesion problem, so any alternative explanation should probably give the same result.
The latter. The objection that I described is known as “tickle defense of EDT”.
Keep in mind that EDT is defined formally, and informal scenarios typically have implicit assumptions of probabilistic conditional independence which affect the result.
By making these assumptions explicit, it is possible to have EDT smoke or not smoke in the smoking lesion problem, and two-box or one-box in Newcomb’s problem.
In fact the smoking lesion problem and Newcomb’s problem are two instances of the same type of decision problem, but their presentations may suggest different implicit assumptions: in the smoking lesion problem virtually everybody makes assumptions under which smoking is intuitively the optimal choice, while in Newcomb’s problem there is no consensus about the optimal choice.
OK, thanks. Though if that’s indeed the “proper” version of EDT, then I no longer understand the conflict between EDT and CDT. Do you know any problem where EDT+tickle disagrees with CDT?
CDT essentially always chooses to two-box/smoke in Newcomb-like problems; in EDT, the choice depends on the specific formalization of the problem.
Thanks, this mostly agrees with my understanding of “naive EDT.” Are you aware of serious efforts to steelman EDT against confounding issues? Smoking lesion is the simplest example, but there are many more complicated ones.
I haven’t seen any good attempts. If someone else was asking, I’d refer them to you, but since it’s you who’s asking, I’ll just say that I don’t know :-)
I have heard a claim that UDT is a kind of “sane precomputed EDT” (?). Why are “you” (they?) basing UDT on EDT? Is this because you are using the level of abstraction where causality somehow goes away, like it goes away if you look at the universal wave function (???). Maybe I just don’t understand UDT? Can you explain UDT? :)
I am trying very very hard to be charitable to the EDT camp, because I am sure there are very smart people in that camp (Savage? Although I think he was aware of confounding issues and tried to rule them out before licensing an action. The trouble is you cannot do it with just conditional independence, that way lie dragons). This is why I keep asking about EDT.
I’ll try to explain UDT by dividing it into “simple UDT” and “general UDT”. These are some terms I just came up with, and I’ll link to my own posts as examples, so please don’t take my comment as some kind of official position.
“Simple UDT” assumes that you have a set of possible histories of a decision problem, and you know the locations of all instances of yourself within these histories. It’s basically a reformulation of a certain kind of single-player game that is already well known in the game theory literature. For more details, see this post. If you try to work through the problems listed in that post, there’s a good chance that the very first one (Absent-Minded Driver) will give you a feeling of how “simple UDT” works. I think it’s the complete and correct solution to the kind of problems where it’s applicable, and doesn’t need much more research.
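To give a flavor, here’s a minimal sketch of “simple UDT” on the Absent-Minded Driver, assuming the standard payoffs from Piccione and Rubinstein’s version (exiting at the first intersection pays 0, exiting at the second pays 4, continuing past both pays 1):

```python
# "Simple UDT" on the Absent-Minded Driver: score each strategy over whole
# histories and pick the best one, without ever asking "what's my
# probability of being at the first or second intersection?"
def expected_payoff(p_continue):
    p_exit_first = 1 - p_continue
    p_exit_second = p_continue * (1 - p_continue)
    p_past_both = p_continue * p_continue
    return p_exit_first * 0 + p_exit_second * 4 + p_past_both * 1

# Crude grid search over strategies; the optimum is p = 2/3, with
# expected payoff 4/3.
best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
print(best_p, expected_payoff(best_p))
```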
“General UDT” assumes that the decision problem is given to you in some form that doesn’t explicitly point out all instances of yourself, e.g. an initial state of a huge cellular automaton, or a huge computer program that computes a universe, or even a prior over all possible universes. The idea is to reduce the problem to “simple UDT” by searching for instances of yourself within the decision problem, using various mathematical techniques. See this post and this post for examples. Unlike “simple UDT”, “general UDT” has many unsolved problems. Most of these problems deal with logical uncertainty and bounded reasoning, like the problem described in this post.
Does that help?
ETA: I notice that the description of “simple UDT” is pretty underwhelming. If you simplify it to “we should model the entire decision problem as a single-player game and play the best strategy in that game”, you might say it’s trivial and wonder what the fuss is about. Maybe it’s easier to understand by comparing it to other approaches. If you ask someone who doesn’t know UDT to solve Absent-Minded Driver or Psy-Kosh’s problem, they might get confused by things like “my subjective probability of being at such-and-such node”, which are part of standard Bayesian rationality (Savage’s theorem), but excluded from “simple UDT” by design. Or if you give them Counterfactual Mugging, they might get confused by Bayesian updating, which is also excluded from UDT by design.
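For instance, here’s a tiny sketch of the Counterfactual Mugging point, assuming the usual stakes (you pay $100 after tails, Omega pays $10,000 after heads if it predicted you’d pay on tails):

```python
# Scoring whole policies before the coin flip, as "simple UDT" does.
def policy_value(pay_when_asked):
    heads_payoff = 10000 if pay_when_asked else 0  # Omega rewards predicted payers
    tails_payoff = -100 if pay_when_asked else 0   # you actually hand over $100
    return 0.5 * heads_payoff + 0.5 * tails_payoff

print(policy_value(True))   # 4950.0
print(policy_value(False))  # 0.0
# An agent that first updates on "the coin came up tails" sees only the
# -100 and refuses to pay, even though the paying policy is better ex ante.
```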
Thinking about this.
It seems to me that talking about EDT, causality and universal wavefunctions is overcomplicating things a little. Let me just describe a problem that could motivate the creation of UDT, and you tell me if it makes sense to you.
Consider cellular automata. There’s no general concept of causality for CA because some of them are reversible and can be computed in either direction. But you can still build a computer inside a CA and write a program for it. The program will output instructions for some robot arms inside the CA to optimize some utility function on the CA’s states. Let’s also assume that the initial state of the CA can contain multiple computers running the program, with different architectures etc. A complete description of the initial state will be given to the program at startup, so there’s no uncertainty anywhere in the setup.
Now the question is, what’s the most general way to write such programs, for different cellular automata and utility functions? It seems to me that if you try to answer that question, you’ll first stumble on the idea of giving the program a quined description of itself, so it can find instances of itself inside the CA. Then you’ll get the idea of using something like “logical consequences” of different possible outputs, because physical consequences aren’t available. Then you’ll notice that provability in a formal theory is one possible way to formalize “logical consequences”, though it has many problems. And eventually you’ll come up with a version of UDT which might look something like this, or possibly this if you’re more concerned with provable optimality than computability.
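To make the proof-search step a bit more concrete, here’s a very schematic sketch (not the exact algorithm from those posts); `provable` below is a placeholder for a bounded theorem prover, and getting something like it to actually work is where most of the open problems live.

```python
# Schematic "UDT via proof search": for each possible output, look for a
# provable statement of the form "if my output is a, then the world's
# utility is u", and return the output with the best provable consequence.
def udt_decision(possible_outputs, possible_utilities, provable):
    best_output, best_utility = None, float("-inf")
    for a in possible_outputs:
        for u in sorted(possible_utilities, reverse=True):
            # The sentence refers to a quined description of this very
            # program, so the agent can locate its own instances inside
            # the world program.
            sentence = f"output(this_program) = {a} -> utility(world) = {u}"
            if provable(sentence):
                if u > best_utility:
                    best_output, best_utility = a, u
                break  # best utility provably implied by this output found
    return best_output
```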