Acausal trade: Introduction
I’ve never really understood acausal trade. So in a short series of posts, I’ll attempt to analyse the concept sufficiently that I can grasp it—and hopefully so others can grasp it as well.
Other posts in the series: Double decrease, Breaking acausal trade, Trade in different types of utility functions, Being special, Multiple acausal trade networks.
The simplest model
There are N different rooms. Since labels are arbitrary, assume you are in room 1, without loss of generality. The agents in room i exist with probability pi, and have a utility ui, which they are motivated to maximise. Each agent only acts in their room. They may choose to diminish ui to increase one or more other uj with i≠j.
The agents will never meet, never interact in any way, won’t even be sure of each other’s existence, may not know N, and may have uncertainty over the values of the other uj’s.
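To make the stakes of the model concrete, here is a minimal sketch (names and structure are my own illustration, not from the post): agent 1 gives up some of u1 so that uj rises, and hopes the agent in room j, who exists only with probability pj, symmetrically does the same for u1.

```python
# Minimal sketch of the simplest model. The function name and framing
# are illustrative assumptions, not definitions from the post.

def expected_gain(p_j, cost, benefit):
    """Expected change in u1 if agent 1 sacrifices `cost` of u1 to raise
    uj, while the agent in room j (existing with probability p_j)
    symmetrically raises u1 by `benefit`."""
    return -cost + p_j * benefit

# A one-for-one trade (cost == benefit) breaks even only if the partner
# certainly exists; any uncertainty discounts the expected return.
print(expected_gain(1.0, 1.0, 1.0))  # 0.0: break-even at p_j = 1
print(expected_gain(0.5, 1.0, 1.0))  # -0.5: a loss at p_j = 0.5
```

This already hints at why existence probabilities matter so much in the rest of the series: the expected value of any acausal deal is discounted by how likely the trading partner is to exist at all.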
Infinities, utility weights, negotiations, trade before existence
There are a number of things I won’t be considering here. First of all, infinities. In reality, acausal trade would happen in the real universe, which is likely infinite. It’s not clear at all how to rank infinitely many causally disconnected world-pieces. So I’ll avoid that entirely, assuming N is finite (though possibly large).
There’s also the thorny issue of how to weigh and compare different utility functions, and/or the process of negotiation about how to divide the gains from trade.
I’ll ignore all these issues, and see the ui as functions from states of the world to real numbers: individual representatives of utility functions, not equivalence classes of utility functions. And the bargaining will be a straight one-for-one increase and decrease: a fair deal is one where ui and uj get the same benefit, as measured by ui and uj respectively.
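The choice of a fixed representative matters for this fairness criterion. A hypothetical sketch (the function is my own illustration): "same benefit" is a comparison of raw utility gains, so rescaling one agent's representative changes whether the same physical outcome counts as fair.

```python
# Illustrative sketch of the one-for-one fairness rule assumed in the
# text. All names here are assumptions, not the post's definitions.

def equal_benefit(delta_ui, delta_uj):
    """A 'fair' one-for-one deal: both sides gain the same amount,
    each measured in its own fixed utility representative."""
    return delta_ui == delta_uj

# The same outcome can be fair or unfair depending on which
# representative of uj was fixed: doubling the representative
# doubles the measured benefit.
print(equal_benefit(1.0, 1.0))      # True under representative uj
print(equal_benefit(1.0, 2 * 1.0))  # False under representative 2*uj
```

This is why the text insists on individual representatives rather than equivalence classes: the one-for-one rule is not invariant under rescaling a utility function.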
I’ll also ignore the possibility of trade before existence, or Rawlsian veils of ignorance. If you are a ui maximiser, but you could have been a uj maximiser if things had been different, then you have no responsibility to increase uj. Similarly, if there are uj maximisers out there, then you have no responsibility to increase uj without getting any ui increases in return.
Changing that last assumption could radically alter the nature of acausal trade (e.g. potentially reducing it to simply maximising a universal prior utility function), so it’s important to emphasise that it is being ignored.