Acausal Trade

In acausal trade, two agents each benefit by predicting what the other wants and doing it, even though they might have no way of communicating or affecting each other, nor even any direct evidence that the other exists.

Background: Superrationality and the one-shot Prisoner’s Dilemma

This concept emerged out of the much-debated question of how to achieve cooperation on a one-shot Prisoner’s Dilemma, where, by design, the two players are not allowed to communicate. On the one hand, a player who is considering the causal consequences of a decision (“Causal Decision Theory”) finds that defection always produces a better result. On the other hand, if the other player symmetrically reasons this way, the result is a Defect/​Defect equilibrium, which is bad for both agents. If they could somehow converge on Cooperate, they would each individually do better. The question is what variation on decision theory would allow this beneficial equilibrium.

Douglas Hofstadter (see references) coined the term “superrationality” to express this state of convergence. He illustrated it with a game in which twenty players, who do not know each other’s identities, each get an offer. If exactly one player asks for the prize of a billion dollars, they get it, but if none or multiple players ask, no one gets it. Players cannot communicate, but each might reason that the others are reasoning similarly. The “correct” decision—the decision which maximizes expected utility for each player, if all players symmetrically make the same decision—is to randomize, asking for the prize with probability one in twenty.
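
To see why, note that if every player independently asks with probability p, the chance that exactly one asks is 20p(1 - p)^19, which is maximized at p = 1/20. A quick numerical check (a minimal sketch in Python; the grid search is our illustration, not part of the original puzzle):

    # Probability that exactly one of n players asks, when each asks
    # independently with probability p: n * p * (1 - p)**(n - 1).
    def p_exactly_one(p, n=20):
        return n * p * (1 - p) ** (n - 1)

    # Grid search over candidate probabilities.
    best_p = max((k / 1000 for k in range(1, 1000)), key=p_exactly_one)
    print(best_p, p_exactly_one(best_p))  # 0.05 0.377...: ask with probability 1/20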

Gary Drescher (see references) developed the concept further, introducing an ethical system called “acausal subjunctive morality.” Drescher’s approach relies on the agents being identical or at least similar, so that each agent can reasonably guess what the other will do based on facts about its own behavior, or even its own “source code.” If it cooperates, it can use this correlation to infer that the other will probably also cooperate.

Acausal trade goes one step beyond this. The agents do not need to be identical, nor similar, nor have the same utility function. Moreover, they do not need to know what the other agents are like, nor even whether they exist. In acausal trade, an agent may have to surmise the probability that other agents, with their particular utility functions and proclivities, exist.

Description

We have two agents, separated so that no interaction is possible. The separation may be simply that neither agent knows the other’s location; or each may be actively prevented from communicating with or affecting the other.

In an asymmetrical example, one agent may be in the other’s future.

Other less prosaic thought experiments can be used to emphasize that interaction may be absolutely impossible. For example, agents that are outside each other’s light cones, or in separate parts of an Everett multiverse. And abstracting away from those scenarios, we can talk of counterfactual “impossible possible worlds” as a model for probability distributions.

In truly acausal trade, the agents cannot count on reputation, retaliation, or outside enforcement to ensure cooperation. The agents cooperate because each knows that the other can somehow predict its behavior very well. (Compare Omega in Newcomb’s problem.) Each knows that if it defects or cooperates, the other will (probabilistically) know this, and defect or cooperate, respectively.

Acausal trade can also be described in terms of (pre)commitment: Both agents commit to cooperate, and each has reason to think that the other is also committing.

Prediction mechanisms

For acausal trade to occur, each agent must infer there is some probability that an agent, of the sort that will acausally trade with it, exists.

The agent might be told, exogenously (as part of the scenario), that the other exists. But more interesting is the case in which the agent surmises the probability that the other exists.

A superintelligence might conclude that other superintelligences would tend to exist because increased intelligence is a convergent instrumental goal for agents. Given the existence of a superintelligence, acausal trade is one of the tricks it would tend to use.

To take a more prosaic example, we humans realize that humans tend to be alike: Even without knowing about specific trading partners, we know that there exist other people with similar situations, goals, desires, challenges, resource constraints, and mental architectures.

Once an agent realizes that another agent might exist, there are different ways it might predict the other agent’s behavior, and specifically conclude that the other agent can be an acausal trading partner.

  1. They might know or surmise each other’s mental architectures (source code).

  2. In particular, they might know that they have identical or similar mental architecture, so that each one knows that its own mental processes approximately simulate the other’s.

  3. They might be able to simulate each other (perhaps probabilistically), or to predict the other’s behavior analytically. (Even we humans simulate each other’s thoughts to guess what the other would do.) A toy sketch of this mechanism appears after this list.

  4. More broadly, it is enough to know (probabilistically) that the other is a powerful optimizer, that it has a certain utility function, and that it can derive utility from resources. Seen mathematically, this is just an optimization problem: What is the best possible algorithm for an agent’s utility function? Cooperate/​Cooperate is optimal under certain assumptions, for if one agent could achieve optimal utility by defecting, then, symmetrically, so could the other, resulting in Defect/​Defect, which generates inferior utility for both.
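
As a minimal sketch of mechanisms 2 and 3 (a toy in Python; the agents and the exact-source-match rule are our illustrative assumptions, not a canonical algorithm), each agent receives the other’s source code and decides by inspecting it. Checking for an exact copy of oneself sidesteps the infinite regress of simulating a simulator that simulates you back:

    import inspect

    # Mechanism 2 in miniature: cooperate iff the other's source code is an
    # exact copy of mine. Two copies of this agent cooperate; against a
    # defector it defects. Real proposals replace the brittle equality test
    # with probabilistic simulation or proof search (mechanism 3).
    def clique_bot(opponent_source):
        my_source = inspect.getsource(clique_bot)
        return "C" if opponent_source == my_source else "D"

    def defect_bot(opponent_source):
        return "D"  # defects no matter what it reads

    print(clique_bot(inspect.getsource(clique_bot)))  # C: recognizes itself
    print(clique_bot(inspect.getsource(defect_bot)))  # D: no trade with a defector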

Decision Theories

Acausal trade is a special case of Updateless decision theory (or a variant such as Functional Decision Theory; see references). Unlike better-known decision theories, such as Causal decision theory, UDT (and acausal trade with it) treats the agent’s own algorithm as both cause and effect.

In Causal Decision Theory, the agent’s algorithm (implementation) is treated as uncaused by the rest of the universe, so that although the agent’s decision and subsequent action can make a difference, its internal make-up cannot (except through that decision). In contrast, in UDT, the agent’s own algorithm is treated as a causal node, influenced by other factors, such as the logical requirement of optimality in a utility-function maximizer. In UDT, as in acausal trade, the agent cannot escape the fact that its decision to defect or cooperate constitutes strong Bayesian evidence as to what the other agent will do, and so it is better off cooperating.
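
The evidential point can be made numerical. Here is a minimal sketch with standard Prisoner’s Dilemma payoffs, assuming (our number, not a canonical one) that each agent’s choice predicts the other’s with 90% accuracy:

    # Payoff to me for (my_move, their_move) in a one-shot Prisoner's Dilemma.
    payoff = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    accuracy = 0.9  # assumed P(other agent mirrors my move | my move)

    for my_move in ("C", "D"):
        other = "D" if my_move == "C" else "C"  # the uncorrelated outcome
        ev = (accuracy * payoff[(my_move, my_move)]
              + (1 - accuracy) * payoff[(my_move, other)])
        print(my_move, ev)  # C: 2.7, D: 1.4

A pure causal reasoner ignores the correlation and defects; once the decision is treated as evidence about the other agent, cooperation has the higher expected utility.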

Limitations and Objections

Acausal trade only works if the agents are smart enough to predict each other’s behavior, and then smart enough to acausally trade. If one agent is stupid enough to defect, and the second is smart enough to predict the first, then neither will cooperate.

Also, as in regular trade, acausal trade only works if the two sides are close enough in power that the weaker side can do something worthwhile enough for the stronger.

A common objection to this idea: Why shouldn’t an agent “cheat” and choose to defect? Can’t it “at the last moment” back out after the other agent has committed? However, this reasoning takes into account only the direct effect of the decision, while a sufficiently intelligent trading partner could predict the agent’s choice, including the last-moment defection, rendering the “cheating” approach suboptimal.

Another objection: Can an agent care about (have a utility function that takes into account) entities with which it can never interact, and about whose existence it is not certain? However, this is quite common even for humans today. We care about the suffering of other people in faraway lands about whom we know next to nothing. We are even disturbed by the suffering of long-dead historical people, and wish that, counterfactually, the suffering had not happened. We even care about entities that we are not sure exist. For example: We might be concerned by a news report that a valuable archaeological artifact was destroyed in a distant country, yet at the same time read other news reports stating that the entire story is a fabrication and the artifact never existed. People even get emotionally attached to the fate of a fictional character.

An example of acausal trade with simple resource requirements

At its most abstract, the agents are simply optimization algorithms. As a toy example, let T be a utility function for which time is the most valuable resource, and let S be a utility function for which space is most valuable; assume that these are the only two resources.

We will now choose the best algorithm for optimizing T. To avoid anthropomorphizing, we simply ask which algorithm—which string of LISP, for example—would give the highest expected utility for a given utility function. Thus, the choice of source code is “timeless”: We treat it as an optimization problem across all possible strings of LISP. We assume that computing power is unlimited. Mathematically, we are asking for the argmax, over all candidate programs, of the expected value of T.

We specify that each agent has some probability of being run in an environment where time is abundant, and otherwise some probability of being run in a space-rich universe.

If the algorithm for T is instantiated in a space-rich environment, it will only be able to gain a small amount of utility for itself, but the algorithm for S would be able to gain a lot of utility there; and vice versa.

The question is: What algorithm for T provides the most optimization power, the highest expected value of utility function T?

If it turns out that the environment is space-rich, the agent for T may run the agent (the algorithm) for S, increasing the utility for S, and symmetrically the reverse. This will happen if each concludes that the optimum occurs when the other agent has the “trading” feature. Given that this is the optimal case, the acausal trade will occur.
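
A sketch of this toy optimization, in Python rather than strings of LISP (the probability and the utility magnitudes are our own assumptions):

    P_TIME = 0.5        # assumed chance an instantiation lands in a time-rich world
    BIG, SMALL = 10, 1  # utility extracted from a favored vs. mismatched resource

    def expected_T(trading):
        # T's own instantiation: full value if time-rich; if space-rich, a
        # selfish T scrapes SMALL for itself, while a trading T instead
        # spends the space running S's algorithm and takes nothing.
        own = P_TIME * BIG + (1 - P_TIME) * (0 if trading else SMALL)
        # S's instantiation (independent): a trading S that finds itself
        # time-rich runs T's algorithm, handing T full value; a selfish S
        # never does. We assume S's optimal algorithm mirrors this policy.
        favor = P_TIME * BIG if trading else 0
        return own + favor

    print(expected_T(trading=False))  # 5.5
    print(expected_T(trading=True))   # 10.0: the trading algorithm wins the argmax

Under these numbers the trading algorithm dominates, which is the sense in which Cooperate/Cooperate falls out of the optimization.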

Acausal trade with complex resource requirements

In the toy example above, resource requirements are very simple. In general, given that agents can have complex and arbitrary goals requiring a complex mix of resources, an agent might not be able to conclude that a specific trading partner has a meaningful chance of existing and trading.

However, an agent can analyze the probability distribution over possible trading partners and weight its actions accordingly. It will do acausal “favors” for one or more trading partners, weighting its effort according to its subjective probability that each trading partner exists. In the limiting case of increasing superintelligence, the expected utility given and received will come into good enough balance to benefit the traders.
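
A minimal sketch of that weighting (the partner list and every number are invented for illustration): the agent does a favor only when the partner’s existence probability times the utility the partner would return exceeds what the same effort would earn spent directly on the agent’s own goals.

    # Hypothetical partners: (existence probability, utility returned to us
    # per unit of effort we spend optimizing that partner's goals).
    partners = {"time-maximizer":       (0.30,  8.0),
                "space-maximizer":      (0.10, 20.0),
                "negentropy-maximizer": (0.02, 50.0)}

    DIRECT_VALUE = 1.0  # assumed utility of spending the effort on our own goals

    for name, (p_exists, payback) in partners.items():
        ev = p_exists * payback  # expected return of the acausal favor
        print(name, ev, "do the favor" if ev > DIRECT_VALUE else "skip")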

Ordinary trade

Even ordinary trade can be analyzed acausally, using a perspective similar to that of Updateless decision theory. We ask: Which algorithm should an agent have to get the best expected value, summing across all possible environments weighted by their probability? The possible environments include those in which threats and promises have been made.
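
As a sketch of this policy-selection view (all numbers assumed, along with the common modeling assumption that a threatener only bothers threatening agents it predicts will give in):

    P_THREATENER = 0.5  # assumed chance the environment contains a threatener
    COST_GIVE_IN = 3.0  # assumed cost of paying off a threat

    # The policy is fixed once, across all environments, before "updating"
    # on any particular threat. A predictably firm agent is never threatened;
    # a predictably soft agent attracts a threat and pays.
    ev_soft = P_THREATENER * -COST_GIVE_IN  # -1.5
    ev_firm = 0.0
    print(ev_soft, ev_firm)  # the firm algorithm scores higher in expectation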

References
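
Douglas Hofstadter, “Dilemmas for Superrational Thinkers, Leading Up to a Luring Lottery”, Scientific American (1983); reprinted in Metamagical Themas (1985)
Gary Drescher, Good and Real: Demystifying Paradoxes from Physics to Ethics, MIT Press (2006)
Eliezer Yudkowsky and Nate Soares, “Functional Decision Theory: A New Theory of Instrumental Rationality”, arXiv preprint (2017)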

[REPOST] The Demiurge’s Older Brother (Scott Alexander, 22 Mar 2017)
Acausal normalcy (Andrew_Critch, 3 Mar 2023)
Worrying less about acausal extortion (Raemon, 23 May 2023)
A new acausal trading platform: RobinShould (Matthew Barnett, 1 Apr 2021)
Counterfactual trade (owencb, 9 Mar 2015)
Decision theory does not imply that we get to have nice things (So8res, 18 Oct 2022)
How Many of Me Are There? (Eneasz, 15 Apr 2011)
Breaking Oracles: superrationality and acausal trade (Stuart_Armstrong, 25 Nov 2019)
Acausal romance (lukeprog, 25 Feb 2012)
[Question] Many Gods refutation and Instrumental Goals. (Proper one) (aditya malik, 9 Aug 2022)
Acausal Trade and the Ultimatum Game (Yair Halberstadt, 5 Sep 2021)
[Question] How does acausal trade work in a deterministic multiverse? (sisyphus, 19 Nov 2022)
A simulation basilisk (andrew sauer, 17 Sep 2021)
Gems from the Wiki: Acausal Trade (13 Sep 2020; www.lesswrong.com)
The Commitment Races problem (Daniel Kokotajlo, 23 Aug 2019)
The Solomonoff prior is malign. It’s not a big deal. (Charlie Steiner, 25 Aug 2022)
[Question] Some doubts about Non Superintelligent AIs (aditya malik, 3 Aug 2022)
A sketch of acausal trade in practice (Richard_Ngo, 4 Feb 2024)
Acausal trade: conclusion: theory vs practice (Stuart_Armstrong, 16 May 2017)
Acausal trade: trade barriers (Stuart_Armstrong, 2 Jun 2017)
Acausal trade: universal utility, or selling non-existence insurance too late (Stuart_Armstrong, 2 Jun 2017)
Acausal trade: full decision algorithms (Stuart_Armstrong, 15 May 2017)
Acausal trade: Introduction (Stuart_Armstrong, 11 May 2017)
Acausal trade: different utilities, different trades (Stuart_Armstrong, 2 Jun 2017)
Acausal trade: double decrease (Stuart_Armstrong, 2 Jun 2017)
Acausal trade: being unusual (Stuart_Armstrong, 16 May 2017)
AI-Box Experiment—The Acausal Trade Argument (XiXiDu, 8 Jul 2011)
Have I just destroyed the acausal trade network? (Stuart_Armstrong, 12 Mar 2015)
Report on modeling evidential cooperation in large worlds (Johannes Treutlein, 12 Jul 2023; arxiv.org)
Acausal Now: We could totally acausally bargain with aliens at our current tech level if desired (Christopher King, 9 Aug 2023)
Goldilocks and the Three Optimisers (dkl9, 17 Aug 2023; dkl9.net)
Resurrection of the dead via multiverse-wide acausual cooperation (avturchin, 3 Sep 2018)
Pascal’s Mugging: The Word Wars (johncrox, 16 Oct 2023)
What We Owe the Past (Austin Chen, 5 May 2022)
The Reverse Basilisk (Dunning K., 30 May 2022)
Mitigating the damage from unaligned ASI by cooperating with aliens that don’t exist yet (MSRayne, 21 Jun 2022)
Logical Decision Theories: Our final failsafe? (Noosphere89, 25 Oct 2022; www.lesswrong.com)
Humans do acausal coordination all the time (Adam Jermyn, 2 Nov 2022)
timestamping through the Singularity (throwaway918119127, 28 Feb 2024)
[Question] Is acausal extortion possible? (sisyphus, 11 Nov 2022)
“Far Coordination” (DragonGod, 23 Nov 2022)
Conditions for Superrationality-motivated Cooperation in a one-shot Prisoner’s Dilemma (Jim Buhler, 19 Dec 2022)
Could Roko’s basilisk acausally bargain with a paperclip maximizer? (Christopher King, 13 Mar 2023)
Further considerations on the Evidentialist’s Wager (Martín Soto, 3 Nov 2022)
Even if we lose, we win (Morphism, 15 Jan 2024)
Logical Line-Of-Sight Makes Games Sequential or Loopy (StrivingForLegibility, 19 Jan 2024)
Evidential Correlations are Subjective, and it might be a problem (Martín Soto, 7 Mar 2024)
Legibility Makes Logical Line-Of-Sight Transitive (StrivingForLegibility, 19 Jan 2024)
Reframing Acausal Trolling as Acausal Patronage (StrivingForLegibility, 23 Jan 2024)
Counterfactual Mechanism Networks (StrivingForLegibility, 30 Jan 2024)
To Boldly Code (StrivingForLegibility, 26 Jan 2024)
Incorporating Mechanism Design Into Decision Theory (StrivingForLegibility, 26 Jan 2024)
Taking into account preferences of past selves (Jacob G-W, 15 Apr 2024)
Cooperation is optimal, with weaker agents too - tldr (Ryo, 18 Apr 2024; medium.com)
[Question] Can agents coordinate on randomness without outside sources? (Mikhail Samin, 6 Jul 2024)
Cooperating with aliens and AGIs: An ECL explainer (24 Feb 2024)
Acausal trade naturally results in the Nash bargaining solution (Christopher King, 8 May 2023)
“Fully” acausal trade (Stuart_Armstrong, 4 Dec 2019)
Oracles: reject all deals—break superrationality, with superrationality (Stuart_Armstrong, 5 Dec 2019)
Acausal trade barriers (Stuart_Armstrong, 11 Mar 2015)