I will get the hang of this eventually. I just have to break it down into a form I can accept first. I see what you, Nesov, Nisan, etc., are doing with the mutually dependent programs or functions. But we could tell a story, of Omega meeting a TDT agent who one-boxes and gets the reward, in which everything is caused by forward-in-time physical causality. So the status of “logical causality” is uncertain and perhaps suspect. It may not be an essential concept for understanding what’s going on here.
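Here is a minimal sketch, in toy Python, of the kind of story I mean (my own construction, with made-up payoffs and function names): Omega “predicts” simply by running the agent’s decision function before filling the boxes, so every step is ordinary forward-in-time computation, and yet the agent whose function returns “one-box” walks away richer.

```python
# Toy Newcomb setup; everything below is plain forward-in-time computation.
BOX_A = 1_000_000   # opaque box: filled only if Omega predicts one-boxing
BOX_B = 1_000       # transparent box: always filled

def omega_fills_boxes(agent):
    predicted = agent()                      # Omega simulates the agent in advance
    return BOX_A if predicted == "one-box" else 0

def play_newcomb(agent):
    contents_a = omega_fills_boxes(agent)    # happens strictly before the choice
    choice = agent()                         # now the agent actually chooses
    return contents_a if choice == "one-box" else contents_a + BOX_B

def tdt_like_agent():
    return "one-box"

def cdt_like_agent():
    return "two-box"

print(play_newcomb(tdt_like_agent))   # 1000000
print(play_newcomb(cdt_like_agent))   # 1000 (box A was left empty)
```

No “logical causality” appears anywhere in the code; the one-boxer’s advantage falls out of the ordinary causal fact that Omega ran the same function earlier.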
But we could tell a story, of Omega meeting a TDT agent who one-boxes and gets the reward, in which everything is caused by forward-in-time physical causality.
Many things can be explained in multiple different ways, and for physical events a physical (causal) explanation is always possible. The lesson of LW-style decision theories seems to be that one shouldn’t privilege the physical explanation over other types of knowledge about how events depend on each other, and that other kinds of dependence can be equally useful (even though there must be a physical explanation for how those dependencies between physical events got established).
My CDT solution is to notice that if Omega’s promise of the future is correct, then it must be capable of bringing it about. Perhaps it knows how to teleport the larger reward away if we go for the visible reward. Maybe Omega is just a master stage magician. The point is that taking the action of going and getting the visible reward will prevent me from getting the invisible one. I don’t need to worry about implementation details like whether it’s really based on my decision before I make it, or just the actions I take after I make it. The constraint is equally real whether I understand the mechanism or not.
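A back-of-the-envelope version of that reasoning (the payoffs are the usual Newcomb numbers; the accuracy figure is just an assumed stand-in for Omega’s track record, not part of the problem statement):

```python
# Treat Omega's track record as a brute constraint on outcomes, without caring
# what mechanism (prediction, teleportation, stage magic) enforces it.
accuracy = 0.99   # assumed probability that Omega's promise holds

def expected_value(action):
    if action == "take only the opaque box":
        # If the promise holds, the large reward is there; otherwise nothing.
        return accuracy * 1_000_000 + (1 - accuracy) * 0
    else:  # take both boxes
        # If the promise holds, grabbing the visible reward costs me the invisible one.
        return accuracy * 1_000 + (1 - accuracy) * (1_000_000 + 1_000)

for action in ("take only the opaque box", "take both boxes"):
    print(action, expected_value(action))   # one-boxing wins for any accuracy above ~50%
```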
Yes, that dodges the real question, but can an example illustrating the same deficiency of CDT be constructed that isn’t subject to this dodge? I’m not certain it’s possible.
You could look at it another way. If a CDT agent knows it will face unspecified Newcomblike problems in the future, it will want to make the most general precommitment now. Of course you can’t come up with the most general precommitment that will solve all decision problems, because there could be a universe that arbitrarily punishes you for having a specific decision algorithm in your head, and rewards some other silly decision algorithm for being different. But if the universe rewards or punishes you only based on the return value of your algorithm and not its internals, then we can hope to figure out mathematically how the most general precommitment (UDT) should choose its return value in every situation. We already know enough to suspect that it will probably talk about logical implication instead of physical causality, even in a world that runs on physical causality.
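To make that concrete, here is a very rough toy rendering (my own sketch, not a faithful statement of UDT): enumerate every mapping from observations to return values, score each mapping only by its outputs across the problems it might face, and commit once and for all to the best one.

```python
# Toy "most general precommitment": choose a policy (observation -> return value)
# that scores best across all anticipated problems, where each problem is allowed
# to look only at the policy's return values, never at its internals.
from itertools import product

observations = ["newcomb", "transparent_newcomb"]   # made-up problem labels
actions = ["one-box", "two-box"]

def score(problem, policy):
    # Each problem's payoff depends only on what the policy returns.
    if problem == "newcomb":
        return 1_000_000 if policy["newcomb"] == "one-box" else 1_000
    else:
        return 1_000_000 if policy["transparent_newcomb"] == "one-box" else 1_000

def best_policy():
    candidates = [dict(zip(observations, outputs))
                  for outputs in product(actions, repeat=len(observations))]
    return max(candidates, key=lambda p: sum(score(prob, p) for prob in observations))

print(best_policy())   # {'newcomb': 'one-box', 'transparent_newcomb': 'one-box'}
```

The real interest, of course, is in replacing the brute-force enumeration and the hand-written score functions with reasoning about what each return value logically implies about the payoffs.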