It seems to me that talking about EDT, causality and universal wavefunctions is overcomplicating things a little. Let me just describe a problem that could motivate the creation of UDT, and you tell me if it makes sense to you.
Consider cellular automata. There's no general concept of causality for CAs, because some of them are reversible and can be computed in either direction. But you can still build a computer inside a CA and write a program for it. The program will output instructions for some robot arms inside the CA to optimize some utility function on the CA's states. Let's also assume that the initial state of the CA can contain multiple computers running the program, with different architectures, etc. A complete description of the initial state will be given to the program at startup, so there's no uncertainty anywhere in the setup.
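To make the reversibility point concrete, here's a toy example (mine, not from any linked post, and the particular local rule is arbitrary): a second-order CA updates each cell by XORing a function of the current row's neighborhood with the previous row. Since XOR is self-inverse, the very same `step` function runs the dynamics forward or backward, so neither time direction is privileged as the "causal" one:

```python
def step(prev, curr):
    """One tick of a second-order CA: next[i] = rule(neighborhood of curr) XOR prev[i].
    Because XOR is self-inverse, feeding the pair back in reverse order undoes the step."""
    n = len(curr)
    rule = lambda l, c, r: l ^ (c | r)   # any local rule works; XOR with prev makes it reversible
    nxt = [rule(curr[(i - 1) % n], curr[i], curr[(i + 1) % n]) ^ prev[i]
           for i in range(n)]
    return curr, nxt

prev = [0, 1, 1, 0, 1, 0, 0, 1]
curr = [1, 0, 0, 1, 0, 1, 1, 0]
a, b = step(prev, curr)        # forward one step
c, d = step(b, a)              # "backward": the same rule, pair reversed
assert (c, d) == (curr, prev)  # the earlier state is recovered exactly
```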
Now the question is, what’s the most general way to write such programs, for different cellular automata and utility functions? It seems to me that if you try to answer that question, you’ll first stumble on the idea of giving the program a quined description of itself, so it can find instances of itself inside the CA. Then you’ll get the idea of using something like “logical consequences” of different possible outputs, because physical consequences aren’t available. Then you’ll notice that provability in a formal theory is one possible way to formalize “logical consequences”, though it has many problems. And eventually you’ll come up with a version of UDT which might look something like this, or possibly this if you’re more concerned with provable optimality than computability.
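In case it helps to see the shape of that final step, here's a rough Python rendering of the proof-search idea (a sketch under my own toy assumptions, not either of the linked algorithms; `provable`, `udt`, `UTILITIES` and `symmetric_world` are hypothetical stand-ins). For each action a and candidate utility u, the agent tries to establish "agent() = a implies utility = u", then picks the action with the best established utility. A real proof-based version would enumerate proofs in a formal theory like PA; here "provability" is crudely stubbed by substituting a constant output for every quined instance of the agent and running the world, which only makes sense because the setup has no uncertainty:

```python
UTILITIES = range(11)  # candidate utility values to try to establish

def provable(world, action, utility):
    # Stand-in for: the formal theory proves "agent()==action -> utility()==utility".
    # Substituting the constant program "output `action`" for every quined
    # instance of the agent and running the world is one crude way to settle it.
    return world(action) == utility

def udt(world, actions):
    # Pick the action with the highest utility we managed to establish.
    best_action, best_utility = None, None
    for a in actions:
        for u in UTILITIES:
            if provable(world, a, u) and (best_utility is None or u > best_utility):
                best_action, best_utility = a, u
    return best_action

def symmetric_world(action):
    # Toy world: the initial state contains two computers running the same
    # program, so fixing the program's output fixes both instances at once.
    out1 = out2 = action
    return 10 if (out1, out2) == (1, 1) else 5 if out1 == out2 else 0

print(udt(symmetric_world, [0, 1]))  # -> 1
```

Note that the two instances in `symmetric_world` get coordinated for free: both run the same program, so conditioning on the program's output fixes both at once, which is exactly what "logical consequences" buys you when physical ones aren't available.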