I’m not so sure that this is actually true. It has been shown that, given a fairly minimal set of constraints that don’t mention probability, decision-makers in a MWI setting maximise expected utility, where the expectation is given with respect to the Born rule: http://arxiv.org/abs/0906.2718
Nice paper, thanks for linking it.
The quantum representation theorem is interesting, however, I don’t think it really proves the Born rule.
If I understand correctly, it effectively assumes it (eq. 13, 14, 15) and then proves that given any preference ordering consistent with the “richness” and “rationality” axioms, there is an utility function such that its expectation w.r.t. the Born probabilities represent that ordering.
But the same applies to any other probability distribution, as long as it assigns non-zero probability wherever the Born distribution does:
Let $p(x)$ be the Born probabilities and $u(x)$ the original utility function. Let $p'(x)$ be another probability distribution.
Then $u'(x) = u(x)\,p(x)/p'(x)$ yields the correct preference ordering under expectation w.r.t. $p'(x)$.
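(As a quick sanity check with made-up numbers: the point is just the identity $\sum_x p'(x)\,u(x)\,p(x)/p'(x) = \sum_x p(x)\,u(x)$.)

```python
# Sanity check: for a fixed lottery, reweighting the utility by p/p'
# gives the same expected value under p' as u had under p.
rewards = [0, 3, 4]                     # hypothetical dollar outcomes
u = {0: 0.0, 3: 3.0, 4: 4.0}            # original utility of each reward (made up)
p = {0: 0.5, 3: 0.25, 4: 0.25}          # Born probabilities (made up)
p2 = {0: 1/3, 3: 1/3, 4: 1/3}           # another distribution, same support

u2 = {x: u[x] * p[x] / p2[x] for x in rewards}   # u'(x) = u(x) p(x) / p'(x)

print(sum(p[x] * u[x] for x in rewards))    # Born-expected utility: 1.75
print(sum(p2[x] * u2[x] for x in rewards))  # same value under p', same ordering
```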
Equations 13, 14 and 15 introduce notation that isn’t used in the axioms, so they don’t really constitute an assumption that maximising Born-expected utility is the only rational strategy.
Your second paragraph has a subtle problem: the argument of u is the reward you get, whereas the argument of p depends on the coefficients of the branches in the superposition.
To illustrate, suppose that I only care about getting Born-expected dollars. Then, letting psi_n denote the world where I get $n, my preference ordering includes
$$\psi_4 \succ \psi_3$$
and
$$\tfrac{1}{\sqrt{3}}\psi_0 + \sqrt{\tfrac{2}{3}}\psi_3 \;\sim\; \tfrac{1}{\sqrt{2}}\psi_0 + \tfrac{1}{\sqrt{2}}\psi_4.$$
You might wonder if my preferences could be represented as maximising utility with respect to the uniform branch weights: you don’t care at all about branches with Born weight zero, but you care equally about all branches with a non-zero coefficient, regardless of what that coefficient is. Then, if the new utility function is U′, we require
$$U'(\$4) > U'(\$3)$$
and
$$\tfrac{1}{2}U'(\$0) + \tfrac{1}{2}U'(\$3) = \tfrac{1}{2}U'(\$0) + \tfrac{1}{2}U'(\$4).$$
However, this is a contradiction (the second equation implies $U'(\$3) = U'(\$4)$), so my preferences cannot be represented in this way.
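If it helps, here is the arithmetic of this example checked numerically (a quick sketch, using the amplitudes above):

```python
from math import sqrt, isclose

# Born weights are squared amplitudes.
A = {0: 1/sqrt(3), 3: sqrt(2/3)}   # (1/sqrt 3) psi_0 + sqrt(2/3) psi_3
B = {0: 1/sqrt(2), 4: 1/sqrt(2)}   # (1/sqrt 2) psi_0 + (1/sqrt 2) psi_4

def born_dollars(state):
    return sum(abs(a) ** 2 * n for n, a in state.items())

assert isclose(born_dollars(A), 2.0)   # 1/3 * 0 + 2/3 * 3 = 2
assert isclose(born_dollars(B), 2.0)   # 1/2 * 0 + 1/2 * 4 = 2, hence A ~ B

# Under uniform branch weights, A ~ B would instead require
#   1/2 U'(0) + 1/2 U'(3) = 1/2 U'(0) + 1/2 U'(4),
# i.e. U'(3) = U'(4), contradicting U'(4) > U'(3) from psi_4 > psi_3.
```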
They are used in the last theorem.
I think this violates indifference to microstate/branching.
I agree that the notation that they introduce is used in the last two theorems (the Utility Lemma and the Born Rule Theorem), but I don’t see where in the proof that they assume that you should maximise Born-expected utility. If you could point out which step you think does this, that would help me understand your comment better.
I agree. This is actually part of the point: you can’t just maximise utility with respect to any old probability function you want to define on superpositions, you have to use the Born rule to avoid violating diachronic consistency or indifference to branching or any of the others.
It is used to define the expected utility in the statement of these two theorems, eq. 27 and 30.
The issue is that the agent needs a decision rule that, given a quantum state, computes an action, and this decision rule must be consistent with the agent’s preference ordering over observable macrostates (which has to obey the constraints specified in the paper).
If the decision rule has to have the form of expected utility maximization, then we have two functions which are multiplied together, which gives us some wiggle room between them.
If I understand correctly, the claim is that if you restrict the utility function to depend only on the macrostate rather than the quantum state, then the probability distribution must be the Born Rule.
It seems to me that while certain probability distributions are excluded, the paper didn’t prove that the Born Rule is the only consistent distribution.
Even if it turns out that it is, the result would be interesting but not particularly impressive, since macrostates are defined in terms of projections, which naturally induce an L2 weighting. But defining macrostates this way makes sense precisely because the Born rule holds.
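Concretely, the weighting I have in mind (in my notation, not necessarily the paper’s): if $P_M$ is the projection onto the macrostate subspace $M$ and $\psi = \sum_i c_i \phi_i$ in an orthonormal basis adapted to $M$, then

$$\|P_M \psi\|^2 = \Big\|\sum_{i \in M} c_i \phi_i\Big\|^2 = \sum_{i \in M} |c_i|^2,$$

which is exactly the squared-amplitude weighting that the Born rule assigns.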
Yes. The point of those theorems is to prove that if your preferences are ‘nice’, then you are maximising Born-expected utility. This is why Born-expected utility appears in the statement of the theorems. They do not assume that a rational agent maximises Born-expected utility, they prove it.
Yes. My point is that maximising Born-expected utility is the only way to do this. This is what the paper shows. The power of this theorem is that other decision algorithms don’t obey the constraints specified in the paper.
No: the functions are of two different arguments. Utility (at least in this paper) is a function of what reward you get, whereas the probability will be a function of the amplitude of the branch. You can represent the strategy of maximising Born-expected utility as the strategy of maximising some other function with respect to some other set of probabilities, but that other function will not be a function of the rewards.
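Here is a toy sketch of that point (Born weights invented for illustration): two lotteries that both contain the $3 reward but assign it different Born weights force the reweighted ‘utility’ of $3 to take two different values, so it cannot be a function of the reward alone.

```python
u3 = 3.0                       # original utility of the $3 reward (made up)
uniform = 0.5                  # uniform weight over two branches

born_weight_in_L1 = 0.5        # lottery 1 gives $3 with Born weight 1/2
born_weight_in_L2 = 0.25       # lottery 2 gives $3 with Born weight 1/4

# u'(x) = u(x) p(x) / p'(x) evaluated on the same reward in the two lotteries:
print(u3 * born_weight_in_L1 / uniform)   # 3.0
print(u3 * born_weight_in_L2 / uniform)   # 1.5: different, so u' depends
                                          # on the amplitudes, not just the reward
```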
A macrostate here is defined in terms of a subspace of the whole Hilbert space, which of course involves an associated projection operator. That being said, I can’t think of a reason why this doesn’t make sense if you don’t assume the Born rule. Could you elaborate on this?
Can this argument be summarized in some condensed form? The paper is long.
I’m not sure that the proof can be summarised in a comment, but the theorem can:
Suppose you are an agent that knows that you are living in an Everettian universe. You have a choice between unitary transformations (the only type of evolution the world is allowed to undergo in MWI), which will in general cause your ‘world’ to split and give you various rewards or punishments in the various resulting branches. Your preferences between unitary transformations satisfy a few constraints:
Some technical ones about which unitary transformations are available.
Your preferences should be a total ordering on the set of the available unitary transformations.
If you currently have unitary transformation U available, and after performing U you will have unitary transformations V and V’ available, and you know that you will later prefer V to V’, then you should currently prefer (U and then V) to (U and then V’).
If there are two microstates that give rise to the same macrostate, you don’t care about which one you end up in.
You don’t care about branching in and of itself: if I offer to flip a quantum coin and give you reward R whether it lands heads or tails, you should be indifferent between me doing that and just giving you reward R.
You only care about which state the universe ends up in.
If you prefer U to V, then changing U and V by some sufficiently small amount does not change this preference.
Then, you act exactly as if you have a utility function on the set of rewards, and you are evaluating each unitary transformation based on the weighted sum of the utility of the reward you get in each resulting branch, where you weight by the Born ‘probability’ of each branch.
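In code, that conclusion amounts to something like this (a toy sketch; the amplitudes, rewards and utilities are invented):

```python
# Toy evaluation of a unitary: the Born-weighted sum of the utilities
# of the rewards in the branches it produces.
branches = [(0.6, "cake"), (0.8, "nothing")]   # (amplitude, reward); 0.36 + 0.64 = 1
utility = {"cake": 10.0, "nothing": 0.0}       # invented utility function

value = sum(abs(a) ** 2 * utility[r] for a, r in branches)
print(value)   # 0.36 * 10 + 0.64 * 0 = 3.6
```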
Thanks! The list of assumptions seems longer than in the De Raedt et al. paper and you need to first postulate branching and unitarity (let’s set aside how reasonable/justified this postulate is) in addition to rational reasoning. But it looks like you can get there eventually.