I’m pretty skeptical of acausal trade, so I don’t think I’m the best one to answer this. But my understanding is that decision theories which engage in it do so because they want to be in the universe/multiverse that contains the increased utility.
Thanks for the reply! I thought the point of the MWI multiverse is that the wavefunction evolves deterministically according to the Schrödinger equation, so if the utility function takes into account what happens in other universes, it will just output a single fixed constant no matter what the agent experiences, since the amplitude of the universal wave function at any given time is fixed. I think the only way for utility functions to make sense is for the agent to care only about its own branch of the universe and its own possible future observer-moments. Whatever “happens” in the other branches, along with their reality measure, is predetermined.
Yes, the universe in that model is indeed deterministic, which means that your wants have no effect on the future but are an artifact of you being an embedded agent. Compatibilism says that you will still act as if you have needs and wants… probably because all your actions are predetermined in every universe, anyway. There is no way to steer the future from its predetermined path, but you are compelled to act as if there were. This includes acausal trade and everything else.
But can that really be called acausal “trade”? It’s simply the fact that in an infinite multiverse there will be causally independent agents who converge on the same computation. If I randomly think “if I do X, there will exist an agent who does Y and we both benefit,” and somewhere in the multiverse there is indeed an agent who does Y in return for me doing X, can I really call that “trade” rather than a coincidence that necessarily has to occur?

But if my actions are determined by a utility function and that utility function extends to other universes/branches, then it simply will not work: no matter what action the agent takes, the total amount of utility in the multiverse is conserved. In order for a utility function to assign the agent’s actions different amounts of expected utility, it necessarily has to focus on the single world the agent is in rather than caring about other branches of the multiverse. Shouldn’t perfectly rational beings therefore care only about their own branch of the multiverse, since that’s the only way to have justified actions?
True acausal trade can only really work in toy problems, since the number of possible utility functions for agents across possible worlds almost certainly grows much faster with agent complexity than the agents’ abilities to reason about all those possible worlds. Whether the multiverse is deterministic or not isn’t really relevant.
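As a rough, purely illustrative way to see that scaling claim (the counting scheme and numbers below are my own assumptions, not anything established in the thread): if potential trading partners are programs describable in at most n bits, the number of distinct candidates, and hence candidate utility functions, grows roughly like 2^n, while any plausible budget for actually modelling them grows far more slowly.

```python
# Back-of-the-envelope comparison for the scaling claim above.
# Purely illustrative assumptions: candidate agents are programs of at most
# n bits, and the reasoning budget is some polynomial in n.

def candidate_agents(n_bits: int) -> int:
    # Loose upper bound on distinct programs (and so distinct utility
    # functions) of description length <= n_bits.
    return 2 ** (n_bits + 1)

def reasoning_budget(n_bits: int) -> int:
    # Hypothetical polynomial budget for how many candidates an agent of
    # comparable complexity could model in any detail.
    return n_bits ** 3

for n in (10, 20, 40, 80):
    print(f"n={n}: candidates ~{candidate_agents(n):,}  budget ~{reasoning_budget(n):,}")
# The exponential/polynomial gap widens rapidly, which is the intuition
# behind "toy problems only".
```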
Even in the toy-problem case, I think of it as more similar to the execution of a will than to trade. We carry out the allocation of resources on behalf of an agent who would have valued those allocations, even though they no longer exist in our causal universe.
There are some elements relevant to acausal trade in this real-world phenomenon. The decedent can’t know or meaningfully affect what the executors actually do, except via a decision structure that applies to both but is external to both (the law in this example, some decision theory in more general acausal trade). The executors now can’t affect what the decedent did in the past, or change the decedent’s actual utility in any way. The will mainly serves the role of a partial utility function which in this example is communicated, but in pure acausal trade many such functions must be inferred.
I think the fact that the multiverse is deterministic does play a role: if an agent’s utility function covers the entire multiverse and the agent cares about the other branches, its decision theory suffers paralysis, since every action has the same expected utility, namely the total amount of utility available to the agent within the multiverse, which is predetermined. Utility functions seem to make sense only when constrained to one branch, with the agent treating its branch as the sole universe; only in that scenario will different actions have different expected utilities.
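To make the paralysis argument concrete, here is a minimal sketch (the branch structure and payoffs are invented for illustration, not a physics model): a utility function summed over a fixed set of branches returns the same number regardless of the action, while a utility function restricted to the agent’s own branch still ranks actions.

```python
# Toy illustration of the paralysis argument (invented numbers).
# The "multiverse" is a fixed set of branches, each with a predetermined
# weight and a predetermined utility.

BRANCHES = {
    # branch_id: (weight, utility_in_that_branch)
    "A": (0.5, 10.0),
    "B": (0.3, -2.0),
    "C": (0.2, 4.0),
}

def multiverse_expected_utility(action: str) -> float:
    """Utility summed over the whole fixed multiverse.

    Every branch's weight and utility are predetermined, so the result is
    the same constant no matter which `action` is passed in.
    """
    return sum(weight * utility for weight, utility in BRANCHES.values())

def branch_expected_utility(action: str, outcomes: dict) -> float:
    """Utility restricted to the agent's own branch.

    `outcomes` maps actions to payoffs within this one branch, so different
    actions can now receive different expected utilities.
    """
    return outcomes[action]

my_branch_outcomes = {"do_x": 7.0, "do_y": 1.0}  # hypothetical local payoffs
for action in ("do_x", "do_y"):
    print(action,
          multiverse_expected_utility(action),                 # identical for both actions
          branch_expected_utility(action, my_branch_outcomes)) # differs by action
```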
You are not entitled to the assumption that the other parts of the multiverse remain constant and uncorrelated with what you do. The multiverse could be superdeterministic. Failing to take your own causes into account leaves you with a worldview in which there are two underdetermined events in the multiverse: the big bang and what you are about to do. Both versions cannot be heeding local causation while everything is connected.
It makes life a whole lot more practical if you do assume it.
There are certainly hypothetical scenarios in which acausal trade is rationally justified: cases in which the rational actors can know whether the other actors perform some acausally-determined actions, depending upon the outcomes of their decision theories, even if they can’t observe them. Any case simple enough to discuss is obviously ridiculously contrived, but the mode of reasoning is not ruled out in principle.
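For a sense of what such a contrived case might look like, here is a sketch under entirely hypothetical assumptions (the payoffs and the “both agents run the same decision procedure” setup are invented): two agents that never interact each predict that the other, running the identical procedure, will output whatever they themselves output, and so both cooperate.

```python
# A deliberately contrived toy case of the kind described above (hypothetical
# payoffs). Two agents are causally disconnected, but each assumes the other
# runs this exact decision procedure.

PAYOFFS = {  # (my_action, their_action) -> my_payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def decide() -> str:
    """Shared decision procedure.

    If the other agent runs the same procedure, the only consistent joint
    outcomes are (cooperate, cooperate) and (defect, defect); pick the action
    whose matched outcome pays more.
    """
    matched = {action: PAYOFFS[(action, action)] for action in ("cooperate", "defect")}
    return max(matched, key=matched.get)

agent_one = decide()  # run in one "world"
agent_two = decide()  # run in a causally disconnected "world"
print(agent_one, agent_two, PAYOFFS[(agent_one, agent_two)])
# Both output "cooperate" and each receives 3, without either observing or
# causally affecting the other -- the coordination comes entirely from the
# shared procedure.
```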
My expectation is that such a mode of reasoning is overwhelmingly ruled out by practical constraints.
I understand the logic, but in a deterministic multiverse the expected utility of any action is the same, since the amplitude of the universal wave function is fixed at any given time. No action has any effect on the total utility generated by the multiverse.
I … don’t think your word bindings are right here, but I’m not quite sure how to make a better pointer to contrast them with.