I also made a post on Less Wrong sketching out reasons why backwards causation might not be absurd, though more for physics-related reasons. I would be keen to see someone with more physics knowledge develop this argument in greater depth.
I also feel that the perfect deterministic twin prisoner’s dilemma is the strongest counterexample to CDT, and I really liked the “play money” intuition pump that you provided.
> We can think of the magic, here, as arising centrally because compatibilism about free will is true… Is that changing the past? In one sense: no.
I would like to offer a frame that might help you disentangle this web. There are two views we can take of the universe:
Raw reality—In this perspective we look at the universe in its raw form; that is, the territory in the map-territory distinction. From this perspective the outcome is fixed, but so is your decision. If we try to construct a decision-theory problem while remaining anchored in this perspective, we end up with a Trivial Decision Problem: there is only a single decision you can take, so if we are asked what you should do, we have to answer that you should make the only decision that you can.
Augmented reality—In this perspective we extend raw reality by constructing counterfactuals. It is only from within this perspective that we can talk about having a choice. The fact that some of these counterfactuals model aspects of the situation as being contrary to the fact of the matter is not a problem: if these counterfactuals perfectly matched the facts, they would be utterly useless.
Once I understood that there are two perspectives we can operate from, and that we can’t mix and match them, I found myself much less likely to fall into confusion. For example, the problem of fatalism arises when we ask what we should do (a real decision implies the perspective of augmented reality), but then claim that determinism allows only one possible outcome (slipping back into the perspective of raw reality).
See Why 1-boxing doesn’t imply backwards causation for this argument in more detail.
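To make the two perspectives concrete, here is a toy sketch of my own (in Python; all names and payoffs are invented, nothing here comes from the post): in raw reality the menu of available actions is a singleton, so deliberation is trivial, while augmented reality evaluates counterfactual actions, all but one of which are contrary to fact.

```python
# Toy illustration of the two perspectives. All names and payoffs
# are invented for this sketch.

def raw_reality():
    # The territory: your decision is as fixed as the outcome, so
    # the "menu" is a singleton -- a Trivial Decision Problem.
    actions = ["the one thing you in fact do"]
    return actions[0]

def augmented_reality(utility):
    # The map, extended: we construct counterfactual actions, score
    # each one, and pick the best. All but one of these actions are
    # contrary to fact -- which is exactly what makes them useful.
    actions = ["take the left path", "take the right path"]
    return max(actions, key=utility)

print(raw_reality())
print(augmented_reality({"take the left path": 1,
                         "take the right path": 2}.get))
```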
Regarding managing the news, the fact that it is difficult to do doesn’t seem like a philosophically relevant argument. The fact that it would be very hard to cheat on an exam doesn’t have much bearing on whether cheating is ethical, and the fact that it would be very hard to become an Olympic champion doesn’t have much bearing on whether it would be good to achieve that goal. Although perhaps you’re just noting that it’s interesting how hard it is.
I think a good way of setting up augmented reality is with CDT-style surgery on an algorithm. By uncoupling an enactable event (an action, decision, or belief) from its definition, you allocate a new free variable (free will) that the world depends on, and eventually set that variable to whatever you decide it to be, ensuring that the cut closes. The trick is to set up the cut in a way that won’t be bothered by possible divergence between the enactable variable and its would-be definition, and that’s easier to ensure in the specifically constructed setting of an agent’s abstract algorithm than in a physical world of unclear nature.
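As a hedged sketch of how I read this construction (Python; the Newcomb-like setup and all names are my own invention, not anything from the comment): the action is uncoupled from the algorithm that would define it and treated as a free variable that the rest of the world depends on; the cut is then closed by pinning the variable to the chosen value.

```python
# Sketch of the "cut" on a Newcomb-like problem. The setup is an
# invented illustration, not a canonical implementation.

def world(action):
    # After the surgery, everything downstream -- including, in this
    # construction, the predictor's model of the same algorithm --
    # is a function of the free variable `action`, not of any
    # particular physical instance.
    box_b = 1_000_000 if action == "one-box" else 0
    box_a = 1_000
    return box_b + (box_a if action == "two-box" else 0)

# Close the cut: set the free variable to the decided value, so the
# variable and its would-be definition end up agreeing.
decision = max(["one-box", "two-box"], key=world)
print(decision, world(decision))  # one-box 1000000
```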
CDT surgery is pretty effective most of the time, but the OP describes some of its limitations. I’m confused—are you just claiming it is effective most of the time or that we shouldn’t worry too much about these limitations?
It is the fact that the surgery is performed on the algorithm (more precisely, on the computation the algorithm specifies) rather than on its instances in the world that makes the usual problems with CDT go away, including the issues discussed in the post.
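Here is a sketch of why the locus of the surgery matters, using the deterministic twin prisoner’s dilemma mentioned above (Python; the payoffs and names are my own illustration): cutting one physical instance leaves the twin’s action fixed, which reproduces the CDT failure, while cutting the shared algorithm moves both instances together.

```python
# Deterministic twin prisoner's dilemma; payoffs are for "me",
# chosen for illustration (T=3 > R=2 > P=1 > S=0).
PAYOFF = {("C", "C"): 2, ("C", "D"): 0,
          ("D", "C"): 3, ("D", "D"): 1}

ALGORITHM_OUTPUT = "D"  # whatever the shared algorithm would output

def surgery_on_instance(my_action):
    # Cutting only my local instance: the twin still runs the
    # unmodified algorithm, so its action stays fixed.
    return PAYOFF[(my_action, ALGORITHM_OUTPUT)]

def surgery_on_algorithm(output):
    # Cutting the algorithm itself: every instance of it -- mine
    # and the twin's -- changes together.
    return PAYOFF[(output, output)]

print(max("CD", key=surgery_on_instance))   # D: defect regardless
print(max("CD", key=surgery_on_algorithm))  # C: cooperation wins
```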