I think acausal trade is just a special case of TDT-like decision theories, which consider “acausal consequences” of your decisions. That is, you reason in the following form, “If I were to output X in condition Y, so would all other sufficiently similar instantiations of me (including simulations). Therefore, in gauging the relative impact of my actions, I must also include the effect of all those instantiations outputting X.”
“Sufficiently similar” includes “different but symmetric” conditions like those described here, i.e., where you have different utility functions, but are in the same position with respect to each other.
In this case, the “acausal trade” argument is that, since everyone would behave symmetrically to you, and you would prefer that everyone do the 3-utility option, you should do it yourself, because it would entail everyone else doing so—even though your influence on the others is not causal.
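To make that concrete, here is a minimal sketch in Python. The payoff numbers are hypothetical and chosen only to have the right shape (each party’s selfish option is worth more to it than its own cooperative option taken in isolation, but everyone cooperating beats everyone being selfish); they are not taken from the original problem.

```python
# A minimal sketch of the reasoning form above. All payoff numbers are
# hypothetical and only illustrate the shape of the situation; they are
# not taken from the original problem.

# Value to us of each party taking each option (made-up numbers):
CONSEQUENCE_TO_US = {
    ("us", "3-utility option"): 3,     # our own cooperative option
    ("us", "selfish option"): 10,      # our own selfish option
    ("them", "3-utility option"): 9,   # assumed benefit to us of their cooperation
    ("them", "selfish option"): 0,     # their selfishness gives us nothing
}

def acausal_value(our_choice: str) -> int:
    """Value to us of our_choice, under the symmetry assumption that every
    sufficiently similar instantiation (here just the other party) outputs
    the same choice in its own, symmetric condition."""
    return sum(CONSEQUENCE_TO_US[(actor, our_choice)] for actor in ("us", "them"))

print(acausal_value("3-utility option"))  # 3 + 9 = 12
print(acausal_value("selfish option"))    # 10 + 0 = 10
```

Under the symmetry assumption, choosing the 3-utility option entails the other party choosing it too, so it comes out ahead (12 vs. 10 with these made-up numbers), even though your choice has no causal influence on theirs.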
Thanks! Is anything similar to acausal trade discussed anywhere outside of LessWrong? Coming up with the simplest case where acausal trade may be required seems like a thought experiment that (at least) philosophers should be aware of.
That I don’t know, and I hope someone else (lukeprog?) fills it in with a literature review.
I do, however, want to add a clarification:
TDT-like decision theories are the justification for engaging in “acausal trade”, while acausal trade itself refers to the actions you take (e.g. the 3-utility option) based on such justifications. (I blurred it a little by calling acausal trade a decision theory.)
Glad to have clarified the issue, and saved some time, for those who were wondering the same thing.
I’ve read all the literature on TDT that I can find, but I still disagree with the people in this thread who claim that TDT recommends the compromise strategy in this problem.
Here is Yudkowsky’s brief summary of TDT:
The one-sentence version is: Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation.
The three-sentence version is: Factor your uncertainty over (impossible) possible worlds into a causal graph that includes nodes corresponding to the unknown outputs of known computations; condition on the known initial conditions of your decision computation to screen off factors influencing the decision-setup; compute the counterfactuals in your expected utility formula by surgery on the node representing the logical output of that computation. [...]
You treat your choice as determining the result of the logical computation, and hence all instantiations of that computation, and all instantiations of other computations dependent on that logical computation.
In the TDT pdf document, he also says:
Nonetheless, as external observers, we expect Andy8AM to correlate with AndySim, just as we expect calculators set to compute 678*987 to return the same answers at Mongolia and Neptune [...] We can organize this aspect of our uncertainty by representing the decisions of both Andy8AM and AndySim as connected to the latent node AndyPlatonic.
This refers to the idea that in a Pearlian causal graph, knowing the exact initial physical state of two causally isolated but physically identical calculators, both poised to calculate 678x987, doesn’t (or shouldn’t) allow us to screen them off from each other and render them probabilistically independent. Knowing their physical state doesn’t imply that we know the answer to the calculation 678x987; if we press the “equals” button on one calculator and receive the answer 669186, this leads us to believe that the same answer will be displayed when we press the equals button on the other, causally isolated calculator.
Since full knowledge of their initial physical states does in fact screen the two calculators off from each other in a causal graph as it would normally be drawn, we are led to conclude that the standard way of drawing a causal graph for this scenario is simply wrong. Yudkowsky therefore includes an additional “latent” node with arcs to each of the calculator outputs, representing the “platonic output” of the computation 678x987 (about which we are logically uncertain despite our physical knowledge of the calculators).
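To see why the extra node does the work, here is a toy Bayesian-updating sketch of my own (it is not code from the TDT document, and the wrong candidate value is arbitrary):

```python
# A toy sketch of the latent-node fix (my own illustration, not code from the
# TDT document). Both calculator displays are children of one shared node for
# the "platonic output" of 678x987, about which we are logically uncertain even
# though we know the calculators' physical states exactly.

# Pretend logical uncertainty: a prior over candidate values of the platonic node.
# 669186 is the true product; the other value is an arbitrary wrong candidate.
prior = {669186: 0.5, 663084: 0.5}

def posterior_over_other_display(observed_display: int) -> dict:
    """P(second display = v | first display = observed_display), assuming each
    working calculator deterministically shows the platonic output."""
    likelihood = {v: 1.0 if v == observed_display else 0.0 for v in prior}
    unnormalised = {v: likelihood[v] * prior[v] for v in prior}
    total = sum(unnormalised.values())
    # The second display simply copies the latent node, so the posterior over
    # the latent node is also the posterior over the second display.
    return {v: p / total for v, p in unnormalised.items()}

print(posterior_over_other_display(669186))  # {669186: 1.0, 663084: 0.0}
```

Knowing the physics of each calculator tells us nothing about the shared node until one display is observed; observing it then pins down our expectation for the other display, despite the causal isolation.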
The latent node “AndyPlatonic” referred to by Yudkowsky in that quote is analogous to the latent node representing the output of the platonic computation 678x987, except that here the computation is the one implemented in an agent’s brain that determines whether he takes one box or two, and the causal graph is the one used by a TDT agent in Newcomb’s problem.
So on the one hand we have an abstract or platonic computation, “678x987”, which is very explicit and simple. Later, on page 85 of the TDT document, we are shown a similar causal graph in which “678x987” is replaced by a platonic computation of expected utility occurring in a human brain, which is not made explicit and must be extremely complex. This still seems fair enough to me: despite the complexity of the computation, Newcomb’s problem specifies that Omega has access to a highly accurate physical model of the human agent, so the computation Omega performs is expected to be very similar (i.e. accurate with ~99% probability) to the computation implemented in the human agent’s brain.
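To spell out for myself what that graph buys you, here is a toy version of the expected-utility comparison. It is my own sketch rather than anything in the document, and it uses the usual illustrative Newcomb payoffs of $1,000 and $1,000,000 together with the ~99% model accuracy mentioned above.

```python
# A toy version of the Newcomb calculation (my own sketch, not taken from the
# document), using the usual illustrative payoffs of $1,000 and $1,000,000 and
# treating Omega's model of the agent as ~99% accurate.

PREDICTOR_ACCURACY = 0.99  # assumed accuracy of Omega's model of the agent

def newcomb_payoff(action: str, prediction: str) -> int:
    """Payout given the agent's action and Omega's earlier prediction."""
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    transparent_box = 1_000 if action == "two-box" else 0
    return opaque_box + transparent_box

def tdt_expected_utility(logical_output: str) -> float:
    """Expected utility of setting the logical node to logical_output.
    The surgery sets our action directly, and sets Omega's prediction through
    its (imperfect) instantiation of the same computation."""
    other = "two-box" if logical_output == "one-box" else "one-box"
    return (PREDICTOR_ACCURACY * newcomb_payoff(logical_output, logical_output)
            + (1 - PREDICTOR_ACCURACY) * newcomb_payoff(logical_output, other))

print(tdt_expected_utility("one-box"))   # 990000.0
print(tdt_expected_utility("two-box"))   # 11000.0
```

With a 99%-accurate model, the surgery on the logical node makes one-boxing come out far ahead, which is the intended behaviour. My complaint is not with this step, but with carrying the same move over to agents who have nothing like a 99%-accurate model of each other.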
On the other hand, in the problem under discussion in this thread, the similarity between the computations implemented in the human brains and those implemented in the alien brains seems rather vague. Assuming that the human responsible for deciding whether humanity implements the “selfish” 10-utilon strategy or the co-operative 3-utilon strategy is a TDT agent (because this is the winning way), I still don’t see why he would choose the 3-utilon strategy.
He has no reason to think that the aliens possess a highly accurate model of him and the computations that occur in his brain. Therefore, he should expect that the extremely complex computation occurring in his brain, which decides whether to choose the 10-utilon or the 3-utilon strategy, is not instantiated in the alien brains with anything remotely close to the probability that would be necessary for it to be optimal for him to implement the 3-utilon strategy.
It is not enough that the computation is similar in a very general way, because within that generality there is much opportunity for the output to differ: it might take only a few bits’ difference for the computation to settle on a different choice of strategy. For example, if the aliens happen to be causal decision theorists, they are bound to choose the selfish strategy.
In other words, I don’t see why “sufficient similarity” should hold in this case. The type of computation in question (determining the choice of strategy) is inevitably extremely complex, not comparable to 678x987. There is only good reason to expect such a complex computation to be instantiated predictably (i.e. with high probability) at any particular other location in the Universe if there is a powerful optimisation process (such as Omega) attempting to realise that outcome and capable of doing so. In this case there is not.
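To put rough numbers on that, here is a back-of-the-envelope calculation. The payoffs are hypothetical, chosen to have the same shape as the problem rather than its actual figures, and p stands for the probability that the aliens’ decision computation outputs the same choice as ours.

```python
# A back-of-the-envelope version of the point above. The payoffs are
# hypothetical (the same shape as, but not the numbers from, the problem):
# our cooperative option is worth 3 to us, the aliens' cooperative option is
# worth 9 to us, and a party's selfish option is worth 10 to itself and
# nothing to the other party. p is the probability that the aliens' decision
# computation outputs the same choice as ours; otherwise assume they choose
# selfishly (as causal decision theorists would).

def eu_cooperate(p: float) -> float:
    # With probability p the aliens mirror us (we get our 3 plus their 9);
    # otherwise they are selfish and we are left with our own 3.
    return p * (3 + 9) + (1 - p) * 3

def eu_selfish(p: float) -> float:
    # Our own selfish option is worth 10 to us, and the aliens give us nothing
    # whether they mirror us or not.
    return 10.0

for p in (0.99, 0.78, 0.5, 0.1):
    print(p, round(eu_cooperate(p), 2), eu_selfish(p))
# Cooperating only wins when p > 7/9, i.e. p greater than about 0.78.
```

With these made-up numbers, cooperating only beats the selfish option when the correlation probability exceeds 7/9, roughly 0.78. That is Omega-level predictability, and nothing in this problem gives us grounds for expecting it.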
I therefore conclude that anyone advocating that humans implement the 3-utilon strategy in this problem is mistaken.
The links from http://wiki.lesswrong.com/wiki/Decision_theory should cover most of the main ideas. There are both more basic and more advanced ones, so you can read as many as appropriate to your current state of knowledge. It’s not all relevant, but most of what is relevant is at least touched on there.
I’m having trouble finding anything about acausal trade. Any recommended readings?