Limiting Causality by Complexity Class
The standard account of causality depends on the idea of intervention: the question of what follows if X doesn't just occur naturally, but we bring it about artificially, independently of its usual causes. This doesn't sit well with embedded agency. If the agent is part of the world, then its own actions will always be caused by the past state of the world, and so it couldn't know whether the apparent effect of its interventions isn't just due to some common cause. There is a potential way out of this if we limit the complexity of causal dependencies.
Classically, X is dependent on Y iff the conditional distribution of X given Y differs from the unconditional distribution of X. In a slightly different formulation: there needs to be a program that takes Y as input and outputs adjustments to our unconditional distribution of X, where those adjustments improve prediction.
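To make the "program that adjusts the prior" reading concrete, here is a minimal sketch (my own illustration, not part of the original setup): it tests dependence empirically by asking whether conditioning on Y improves prediction of X on observed samples, measured by log-loss.

```python
# Sketch of the classical dependence test, assuming we can sample (x, y) pairs
# from the joint distribution. X depends on Y iff predicting X from the
# conditional distribution given Y beats the unconditional prediction.

from collections import Counter
import math

def log_loss_unconditional(samples):
    """Average negative log-probability of x under the marginal P(X)."""
    xs = [x for x, _ in samples]
    counts = Counter(xs)
    n = len(xs)
    return -sum(math.log(counts[x] / n) for x in xs) / n

def log_loss_conditional(samples):
    """Average negative log-probability of x under the adjusted P(X | Y)."""
    joint = Counter(samples)
    y_counts = Counter(y for _, y in samples)
    n = len(samples)
    return -sum(math.log(joint[(x, y)] / y_counts[y]) for x, y in samples) / n

def is_dependent(samples, tolerance=1e-9):
    """X is dependent on Y iff conditioning on Y improves prediction of X."""
    return log_loss_unconditional(samples) - log_loss_conditional(samples) > tolerance

# X is a copy of Y, so conditioning helps:
print(is_dependent([(0, 0), (1, 1), (0, 0), (1, 1)]))  # True
# X carries no information about Y:
print(is_dependent([(0, 0), (0, 1), (1, 0), (1, 1)]))  # False
```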
Now we could limit which programs we consider admissible. The limit will be by the computational complexity of the program with respect to the precision of Y. For example, I will say that X is polynomially dependent on Y iff there is a program running in polynomial time that fulfills these conditions. (Note that dependence in this new sense need no longer be symmetric.)
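As a schematic picture of the restriction (only an illustration; the step-metering and the particular budget c·n^k are my assumptions, not a real complexity measure), an adjuster program counts as admissible only if it finishes within a step budget that is polynomial in the number of bits of Y:

```python
# Schematic of "polynomial dependence": the adjuster is written as a generator
# that yields once per metered "step", and it only counts if it finishes within
# a budget polynomial in the precision of Y (here: the number of bits of Y).

def poly_budget(precision_bits, c=10, k=2):
    # Illustrative budget of c * n^k steps for an input of n bits.
    return c * precision_bits ** k

def run_with_budget(adjuster, y_bits, budget):
    """Run a step-yielding adjuster; return its output, or None if over budget."""
    gen = adjuster(y_bits)
    steps = 0
    try:
        while True:
            next(gen)
            steps += 1
            if steps > budget:
                return None          # exceeded the admissible complexity class
    except StopIteration as done:
        return done.value            # the adjustment to the prior on X

def parity_adjuster(y_bits):
    """Toy adjuster: shifts the prior on X according to the parity of Y."""
    parity = 0
    for b in y_bits:
        parity ^= b
        yield                        # one metered step per bit of Y
    return {"shift_towards": parity}

y_bits = [1, 0, 1, 1]
print(run_with_budget(parity_adjuster, y_bits, poly_budget(len(y_bits))))
# {'shift_towards': 1}
```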
Unlike with unlimited dependence, there's nothing in principle impossible about the agent's actions being polynomially independent of an entire past world-state. This can form a weakened sense of intervention, and the limited-causal consequences of such interventions can be determined from actually observed frequencies.
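Here is a minimal sketch of what "determined from actually observed frequencies" could amount to, under the assumption (mine, for illustration) that the agent's action really is independent, in the limited sense, of the relevant past: the plain conditional frequencies of outcomes given the action then already estimate the consequences of intervening with that action.

```python
# Sketch: read off limited-causal consequences from ordinary observed behaviour.
# This identifies consequences of intervention only under the assumed
# (complexity-limited) independence of the action from the past.

from collections import Counter, defaultdict

def consequence_estimates(observations):
    """observations: list of (action, outcome) pairs from ordinary behaviour."""
    by_action = defaultdict(Counter)
    for action, outcome in observations:
        by_action[action][outcome] += 1
    return {
        action: {o: n / sum(counts.values()) for o, n in counts.items()}
        for action, counts in by_action.items()
    }

observations = [("a", "win"), ("a", "win"), ("a", "lose"), ("b", "lose")]
print(consequence_estimates(observations))
# {'a': {'win': 0.666..., 'lose': 0.333...}, 'b': {'lose': 1.0}}
```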
Now, if we're looking at a system where all dependencies are within a certain complexity class, and we analyse it with something stronger than that, the result will look just like ordinary causality. This also explains the apparent failure of causality in Newcomb's problem: we now have a substantive account of what it is to act in intervention (to be independent in a given complexity class). In general, this requires work by the agent: it needs to determine its actions in such a way that they don't show this dependence. Omega is constructed so that the agent's computational resources are insufficient for that, so the agent fails to make itself independent of Omega's prediction. The agent would similarly fail for "future" events it causes where the dependence is within the complexity class. In a sense, this is what puts the events into its subjective future: that it cannot act independently of them.