Choosing an action is not a good way of exerting acausal influence on computations that aren’t already paying attention to you in particular. When agent A wants to influence computation C, there is some other computation D that C might be paying attention to, and A is free to also start paying attention to D by allowing it to influence A’s actions. This lets A create an incentive for D to act in particular ways: A channels D’s decisions into the consequences of A’s actions, which were arranged to depend on D’s decisions in a way visible to D. As a result, D gains influence over both A and C, and A becomes coordinated with C through both of them being influenced by D (here D plays the role of an adjudicator/contract between them). So correlations are not set a priori; setting them up should be part of how acausal influence is routed by decisions.
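A minimal sketch of this setup in Python (purely illustrative; all names and the trivial decision rules are hypothetical): C conditions only on D, and A chooses to condition on D as well. That alone is enough to correlate A with C, even though neither references the other.

```python
# Toy model of coordination through a shared contract computation D.
# C pays attention to D, not to A; A opts in by keying its action off D,
# making A's behavior depend on D in a way visible to D.

def D() -> str:
    """The contract/adjudicator computation. Both A and C can run it."""
    # Any deterministic rule works for the illustration.
    return "cooperate"

def C() -> str:
    """The computation A wants to influence. It conditions only on D."""
    return "reward" if D() == "cooperate" else "punish"

def A() -> str:
    """Agent A. By evaluating D and acting on its output, A gives D
    influence over A's actions."""
    return "pay" if D() == "cooperate" else "defect"

# A and C never reference each other, yet their outputs are correlated,
# because both are functions of D's output.
assert (A(), C()) == ("pay", "reward")
```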
A priori, there is a danger that, by thinking more, they would unexpectedly learn the actual output of C. The trade would then no longer be possible, since taking a would give them no additional evidence about whether c happens.
If A’s instrumental aim is to influence some D (a contract between A and C), what matters is D’s state of logical uncertainty about A and C (and about the way they depend on D), since that is the basis for D’s decisions that affect C. A’s own state of logical uncertainty about C is less directly relevant. So even if A gets to learn C’s outcome, that shouldn’t be a problem. Merely observing some fact doesn’t rule out that the observation took place in an impossible situation, so observing some outcome of C (from a situation of unclear actuality) doesn’t mean that the actual outcome is as observed. And if D is uncertain about the actuality of that situation, it might still be paying attention to what A does there, and to how what A does there depends on D’s decisions. So A shouldn’t give up just because, according to its own state of knowledge, the influence of its actions is gone: it still has influence over the way its actions depend on others’ decisions, according to others’ states of knowledge.
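A toy sketch of this point, continuing the hypothetical setup above: D scores A’s policy across every situation D can’t yet rule out, so A’s behavior in a situation A privately knows to be non-actual still feeds into D’s decision.

```python
# What matters for influence is how A's *policy* depends on D's decision
# from D's point of view, not what A has already learned about C. D
# evaluates A's policy over all situations D still considers possible,
# including ones A knows to be non-actual.

def A_policy(observation: str, d_decision: str) -> str:
    """A's policy: a function of the observation AND of D's decision.
    A keeps conditioning on D even after observing C's outcome."""
    return "pay" if d_decision == "cooperate" else "withhold"

def D_decision() -> str:
    """D decides under logical uncertainty about which situation is
    actual, so it checks A's policy in every situation it can't rule out."""
    situations = ["c_happens", "c_does_not_happen"]
    for d_choice in ["cooperate", "defect"]:
        # D cooperates only if A pays in *all* candidate situations,
        # actual or not.
        if all(A_policy(obs, d_choice) == "pay" for obs in situations):
            return d_choice
    return "defect"

# Even if A has privately observed "c_does_not_happen", its policy's
# behavior in that (possibly impossible) situation is what D responds to.
assert D_decision() == "cooperate"
```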