I haven’t seen any good attempts. If someone else was asking, I’d refer them to you, but since it’s you who’s asking, I’ll just say that I don’t know :-)
I have heard a claim that UDT is a kind of “sane precomputed EDT” (?). Why are “you” (they?) basing UDT on EDT? Is this because you are using a level of abstraction where causality somehow goes away, like it goes away if you look at the universal wave function (???)? Maybe I just don’t understand UDT? Can you explain UDT? :)
I am trying very, very hard to be charitable to the EDT camp, because I am sure there are very smart people in that camp (Savage? Although I think he was aware of confounding issues and tried to rule them out before licensing an action. The trouble is that you cannot do it with conditional independence alone; that way lie dragons). This is why I keep asking about EDT.
I’ll try to explain UDT by dividing it into “simple UDT” and “general UDT”. These are some terms I just came up with, and I’ll link to my own posts as examples, so please don’t take my comment as some kind of official position.
“Simple UDT” assumes that you have a set of possible histories of a decision problem, and you know the locations of all instances of yourself within these histories. It’s basically a reformulation of a certain kind of single-player game that is already well known in the game theory literature. For more details, see this post. If you try to work through the problems listed in that post, there’s a good chance that the very first one (Absent-Minded Driver) will give you a feeling for how “simple UDT” works. I think it’s the complete and correct solution for the kinds of problems where it’s applicable, and doesn’t need much more research.
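To make the “play the best strategy” part concrete, here’s a minimal Python sketch of “simple UDT” on the Absent-Minded Driver (the 0/4/1 payoffs are the standard ones; the grid-search resolution is an arbitrary choice of mine). Notice that the agent only ever scores whole strategies against whole histories; there’s no “where am I right now” anywhere:

```python
# Absent-Minded Driver via "simple UDT": pick the strategy (a probability p
# of continuing at an intersection) that maximizes expected utility over the
# set of possible histories. No "subjective probability of being at
# such-and-such node" appears anywhere.
#
# Standard payoffs: exit at the first intersection -> 0, exit at the
# second -> 4, continue past both -> 1.

def expected_utility(p: float) -> float:
    exit_first = (1 - p) * 0         # history: EXIT at X
    exit_second = p * (1 - p) * 4    # history: CONTINUE at X, EXIT at Y
    continue_both = p * p * 1        # history: CONTINUE at X and Y
    return exit_first + exit_second + continue_both

# Brute-force search over strategies (fine enough for illustration).
best_p = max((i / 1000 for i in range(1001)), key=expected_utility)
print(best_p, expected_utility(best_p))  # ~0.667 and ~1.333, i.e. p = 2/3
```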
“General UDT” assumes that the decision problem is given to you in some form that doesn’t explicitly point out all instances of yourself, e.g. an initial state of a huge cellular automaton, or a huge computer program that computes a universe, or even a prior over all possible universes. The idea is to reduce the problem to “simple UDT” by searching for instances of yourself within the decision problem, using various mathematical techniques. See this post and this post for examples. Unlike “simple UDT”, “general UDT” has many unsolved problems. Most of these problems deal with logical uncertainty and bounded reasoning, like the problem described in this post.
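As a toy illustration of what “searching for instances of yourself” could mean (my own drastic simplification, not how the linked posts formalize it): if the world is handed to you as program text, the crudest first step is to obtain your own source by quining and scan the world for literal copies of it:

```python
# Toy "general UDT" preprocessing step: locate instances of yourself inside a
# world description. Real proposals look for logical/mathematical structure,
# not literal substrings; this is just the simplest possible picture.
import inspect

def my_policy(observation):
    # Placeholder decision rule (hypothetical).
    return "continue"

def find_instances(world_source: str, agent_source: str) -> list:
    """Return the offsets where a copy of the agent's source appears."""
    offsets, start = [], 0
    while (i := world_source.find(agent_source, start)) != -1:
        offsets.append(i)
        start = i + 1
    return offsets

me = inspect.getsource(my_policy)   # "quined" self-description
world = "...lots of physics...\n" + me + "\n...more physics...\n" + me
print(find_instances(world, me))    # two embedded copies of the agent
```

Once the instances are located, the problem reduces to “simple UDT” over the histories the world description generates.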
Does that help?
ETA: I notice that the description of “simple UDT” is pretty underwhelming. If you simplify it to “we should model the entire decision problem as a single-player game and play the best strategy in that game”, you might say it’s trivial and wonder what all the fuss is about. Maybe it’s easier to understand by comparing it to other approaches. If you ask someone who doesn’t know UDT to solve Absent-Minded Driver or Psy-Kosh’s problem, they might get confused by things like “my subjective probability of being at such-and-such node”, which are part of standard Bayesian rationality (Savage’s theorem) but excluded from “simple UDT” by design. Or if you give them Counterfactual Mugging, they might get confused by Bayesian updating, which is also excluded from UDT by design.
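Counterfactual Mugging is a good place to see the “no updating” point in numbers (the $100/$10000 stakes are the usual ones from the problem statement). The UDT agent scores whole policies from before the coin flip; the updater conditions on the tails branch it finds itself in and refuses to pay:

```python
# Counterfactual Mugging: Omega flips a fair coin. On tails it asks you for
# $100; on heads it pays you $10000 iff it predicts you would have paid on
# tails. UDT evaluates each policy across both branches, with no updating.

def policy_value(pays_on_tails: bool) -> float:
    heads = 10000 if pays_on_tails else 0  # payout depends on predicted policy
    tails = -100 if pays_on_tails else 0
    return 0.5 * heads + 0.5 * tails

print(policy_value(True))   # 4950.0 -> UDT commits to paying
print(policy_value(False))  # 0.0
# An agent that first updates on "the coin came up tails and I'm being asked"
# sees only the -100 and refuses -- which is exactly the move UDT excludes.
```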
Thinking about this.
It seems to me that talking about EDT, causality, and universal wavefunctions is overcomplicating things a little. Let me just describe a problem that could motivate the creation of UDT, and you tell me if it makes sense to you.
Consider cellular automata. There’s no general concept of causality for CAs, because some of them are reversible and can be computed in either direction. But you can still build a computer inside a CA and write a program for it. The program will output instructions for some robot arms inside the CA to optimize some utility function on the CA’s states. Let’s also assume that the initial state of the CA can contain multiple computers running the program, with different architectures and so on. A complete description of the initial state will be given to the program at startup, so there’s no uncertainty anywhere in the setup.
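To see the reversibility point in code, here’s a tiny second-order CA (the particular rule and sizes are my arbitrary choices). Because each update XORs a function of the current row into the previous row, the very same rule run with time swapped recovers the past, so the dynamics have no preferred direction:

```python
# A second-order reversible cellular automaton: next row = f(current row) XOR
# previous row. Since XOR is its own inverse, the same step function run with
# the time order swapped reconstructs earlier states exactly.
import random

N = 16

def step(prev, cur):
    # f is the parity of each three-cell neighborhood (arbitrary choice);
    # any f at all yields a reversible rule thanks to the XOR.
    f = [(cur[(i - 1) % N] + cur[i] + cur[(i + 1) % N]) % 2 for i in range(N)]
    return [f[i] ^ prev[i] for i in range(N)]

random.seed(0)
s0 = [random.randint(0, 1) for _ in range(N)]
s1 = [random.randint(0, 1) for _ in range(N)]

a, b = s0, s1
for _ in range(100):            # run forward 100 steps...
    a, b = b, step(a, b)

x, y = b, a
for _ in range(100):            # ...then backward, using the same rule
    x, y = y, step(x, y)

print((y, x) == (s0, s1))       # True: the full history is recovered
```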
Now the question is: what’s the most general way to write such programs, for different cellular automata and utility functions? It seems to me that if you try to answer that question, you’ll first stumble on the idea of giving the program a quined description of itself, so it can find instances of itself inside the CA. Then you’ll get the idea of using something like “logical consequences” of different possible outputs, because physical consequences aren’t available. Then you’ll notice that provability in a formal theory is one possible way to formalize “logical consequences”, though it has many problems. And eventually you’ll come up with a version of UDT which might look something like this, or possibly this if you’re more concerned with provable optimality than computability.
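For flavor, here’s a mock-up of the proof-search step (entirely my own toy: a real version would enumerate proofs in a formal theory of statements like “agent() == a implies U() >= u”, whereas here the set of “provable” lemmas is just a lookup table with Newcomb-flavored numbers I made up). Only the agent’s control flow is meant to be faithful: scan utility levels from best to worst, and commit to the first action whose consequences you can prove:

```python
# Mock proof-search UDT. "Provability" is faked with a table of lemmas of the
# form "agent() == a implies U() >= u"; only the agent's loop is faithful.

ACTIONS = ["one-box", "two-box"]
UTILITY_LEVELS = [1001000, 1000000, 1000, 0]  # candidate utilities, best first

# Pretend a proof search produced these lemmas (hypothetical numbers):
# taking the action provably yields at least this much utility.
PROVABLE = {"one-box": 1000000, "two-box": 1000}

def agent():
    # Scan utility levels from best to worst; take the first action a for
    # which "agent() == a implies U() >= u" is (mock-)provable.
    for u in UTILITY_LEVELS:
        for a in ACTIONS:
            if PROVABLE.get(a, -1) >= u:
                return a
    return ACTIONS[0]  # fallback; unreachable if any lemma exists

print(agent())  # "one-box"
```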