The problem is that, for a deterministic decision algorithm running in a deterministic world, only one outcome actually happens. If you want to define a larger set of “logically possible” outcomes, I don’t see a difference in principle between the outcome where your decision algorithm returns something it doesn’t actually return, and the outcome where 1=2 and pumpkins fall from the sky.
You might say that outcomes are “possible” or “impossible” from the agent’s point of view, not absolutely. The agent must run some “pruning” algorithm, and the set of “possible” outcomes will be defined as the result of that. But then the problem is that the set of “possible” outcomes will depend on how exactly the “pruning” works, and how much time the agent spends on it. With all the stuff about self-fulfilling proofs in UDT, it might be possible to have an agent that hurts itself by overzealous “pruning”.
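To make the worry concrete, here is a toy sketch in Python. Everything in it is hypothetical: the payoffs are the usual 5-and-10 setup, and the "spurious proof" is mocked as a hard-coded pruner rather than derived from any real proof search. The point is only that the optimizer's answer is hostage to whatever the pruner decides, and an overzealous pruner's verdict is self-fulfilling, so the agent can never notice the mistake from inside:

```python
# Toy model, not a real proof search: the optimizer only ranks whatever
# actions the "pruning" step labels possible, so the pruner's verdict
# fully determines the outcome.

PAYOFFS = {"take_5": 5, "take_10": 10}  # hypothetical 5-and-10 payoffs

def cautious_pruner(actions):
    # Rules nothing out.
    return set(actions)

def overzealous_pruner(actions):
    # "Proves" (by fiat, standing in for a spurious self-fulfilling
    # proof) that the agent won't take the $10. The agent then indeed
    # never takes it, so the verdict is never refuted.
    return set(actions) - {"take_10"}

def agent(pruner):
    possible = pruner(PAYOFFS)            # step 1: prune "impossible" outcomes
    return max(possible, key=PAYOFFS.get) # step 2: optimize over the survivors

for pruner in (cautious_pruner, overzealous_pruner):
    action = agent(pruner)
    print(f"{pruner.__name__}: {action} -> ${PAYOFFS[action]}")
# cautious_pruner: take_10 -> $10
# overzealous_pruner: take_5 -> $5
```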
I must be missing something. Suppose you write a chess program. The part of it which determines which moves are valid is separate from the part which decides which moves are good. Does a chess bot not qualify as a “deterministic decision algorithm running in a deterministic world”?
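Concretely, the separation looks something like this (a minimal sketch, assuming the third-party python-chess library for move generation; the evaluation is deliberately crude):

```python
# Legality lives entirely in board.legal_moves; the evaluator, which
# decides which moves are "good", never sees an invalid move at all.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9}

def material(board: chess.Board, color: bool) -> int:
    return sum(v * len(board.pieces(p, color)) for p, v in PIECE_VALUES.items())

def best_move(board: chess.Board) -> chess.Move:
    mover = board.turn
    def score(move: chess.Move) -> int:
        board.push(move)  # "what happens if I play this?"
        s = material(board, mover) - material(board, not mover)
        board.pop()
        return s
    # The optimizer ranges over legal moves only; it cannot even ask
    # about an illegal one.
    return max(board.legal_moves, key=score)

print(best_move(chess.Board()))  # some legal opening move (all tie at 0 material)
```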
Or is the issue the uncertainty introduced by the other player? Then how about a Rubik's Cube solver? The valid moves are defined separately from the moves that get you closer to the solved state. You never apply your optimizer to invalid moves, which is exactly what CDT does in Newcomb's problem.
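The solver has the same shape. A sketch below, with a made-up three-element permutation puzzle standing in for the cube (the move set and names are hypothetical): the valid moves are fixed up front, and the search only ever composes them.

```python
# Toy "solver": VALID_MOVES defines what is possible; the BFS below
# decides which of those moves are good (i.e., reach the solved state).
from collections import deque

SOLVED = (0, 1, 2)
VALID_MOVES = {                        # "which moves are valid"
    "swap01": lambda s: (s[1], s[0], s[2]),
    "swap12": lambda s: (s[0], s[2], s[1]),
}

def solve(state):
    # Breadth-first search over valid moves only -- the "which moves
    # are good" part never constructs an invalid move.
    queue, seen = deque([(state, [])]), {state}
    while queue:
        s, path = queue.popleft()
        if s == SOLVED:
            return path
        for name, move in VALID_MOVES.items():
            t = move(s)
            if t not in seen:
                seen.add(t)
                queue.append((t, path + [name]))

print(solve((2, 0, 1)))  # ['swap01', 'swap12']
```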