As you know, I’m interested in decision theories that work in completely deterministic worlds. What does “pruning” mean if only one outcome is logically possible?
Not one, multiple. For example, in Newcomb’s you can still choose to one-box (you get $1M) or two-box (you get $1k). However, “two-box and get $1,001,000” is not in the problem domain at all, just as killing the predictor and grabbing all its riches isn’t. Similarly, if you play a game of, say, chess, there are valid moves and invalid moves. When designing a chess program you don’t need to worry about an opponent making an invalid move. In the cloned PD example, CD and DC are invalid moves. If an algorithm (a decision theory) cannot filter them out automatically, you have to prune the list of all moves until only valid moves remain before applying it. I am surprised that this trivial observation is not completely obvious.
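To make the “prune, then optimize” step concrete, here is a minimal Python sketch (my own toy, not anyone’s canonical formalization): enumerate candidate (action, box contents) pairs, discard the ones a perfect predictor rules out, and only then maximize over what remains.

```python
# Toy illustration of "prune invalid moves, then optimize" in Newcomb's problem.
# The predictor is assumed perfect, so mixed outcomes like
# ("two-box", opaque box full) are not in the problem domain at all.

candidate_outcomes = [
    # (action, opaque_box_filled, payoff)
    ("one-box", True,  1_000_000),
    ("one-box", False, 0),
    ("two-box", True,  1_001_000),
    ("two-box", False, 1_000),
]

def consistent(action, opaque_box_filled):
    """A perfect predictor fills the opaque box iff the agent one-boxes."""
    return opaque_box_filled == (action == "one-box")

# Step 1: prune outcomes the problem statement rules out.
valid_outcomes = [(a, f, p) for (a, f, p) in candidate_outcomes if consistent(a, f)]

# Step 2: only now apply the optimizer to what is left.
best_action, _, best_payoff = max(valid_outcomes, key=lambda o: o[2])
print(best_action, best_payoff)  # one-box 1000000
```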
The problem is that, for a deterministic decision algorithm running in a deterministic world, only one outcome actually happens. If you want to define a larger set of “logically possible” outcomes, I don’t see a difference in principle between the outcome where your decision algorithm returns something it doesn’t actually return, and the outcome where 1=2 and pumpkins fall from the sky.
You might say that outcomes are “possible” or “impossible” from the agent’s point of view, not absolutely. The agent must run some “pruning” algorithm, and the set of “possible” outcomes will be defined as the result of that. But then the problem is that the set of “possible” outcomes will depend on how exactly the “pruning” works, and how much time the agent spends on it. With all the stuff about self-fulfilling proofs in UDT, it might be possible to have an agent that hurts itself by overzealous “pruning”.
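If it helps, here is a deliberately crude toy in Python (nothing like a real proof-based agent, and the pruning rules are made up purely for illustration) showing how the “possible” set, and hence the chosen action, depends on which pruner the agent runs:

```python
# Toy illustration (not UDT itself): the "possible" set depends on the pruner.
# Both pruners see the same candidate outcomes; the overzealous one also
# discards an action it hastily "proves" it will not take, and thereby
# ends up optimizing over a worse menu.

candidates = [("one-box", 1_000_000), ("two-box", 1_000)]

def cautious_prune(options):
    # Keeps every outcome the problem statement allows.
    return options

def overzealous_prune(options):
    # Hastily concludes "I never leave money on the table, so one-boxing
    # won't happen" and prunes that option away.
    return [(a, p) for (a, p) in options if a != "one-box"]

for prune in (cautious_prune, overzealous_prune):
    menu = prune(candidates)
    best = max(menu, key=lambda o: o[1])
    print(prune.__name__, "->", best)
# cautious_prune -> ('one-box', 1000000)
# overzealous_prune -> ('two-box', 1000)
```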
I must be missing something. Suppose you write a chess program. The part of it which determines which moves are valid is separate from the part which decides which moves are good. Does a chess bot not qualify as a “deterministic decision algorithm running in a deterministic world”?
Or is the issue the uncertainty introduced by the other player? Then how about a Rubik’s Cube solver? The valid moves are determined separately from the moves that get you closer to the solved state. You never apply your optimizer to invalid moves, which is precisely the mistake CDT makes in Newcomb’s.
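For concreteness, here is a minimal runnable Python sketch of that separation (a made-up one-dimensional puzzle, not a real chess or Rubik’s Cube engine): legality is decided by the rules, quality by a heuristic, and the optimizer only ever sees the pre-pruned legal moves.

```python
# Minimal sketch of the separation: a token on positions 0..4 must reach
# position 4. Move legality (staying on the board) is determined separately
# from move quality (distance to the goal), and the optimizer only ever
# ranks legal moves. Purely illustrative, not a real game engine.

GOAL, BOARD = 4, range(5)

def legal_moves(pos):
    """Rules of the game: step left or right, but never off the board."""
    return [step for step in (-1, +1) if pos + step in BOARD]

def evaluate(pos):
    """Heuristic: closer to the goal is better."""
    return -abs(GOAL - pos)

def choose_move(pos):
    """The optimizer sees only pre-pruned, legal moves."""
    return max(legal_moves(pos), key=lambda step: evaluate(pos + step))

pos = 0
while pos != GOAL:
    pos += choose_move(pos)
print("reached", pos)  # reached 4
```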