What the decision-theoretic account explains is why I should expect to see dark regions in a double-slit experiment.
So how does it explain that? How can the imperative to maximize your expected utility require you to “expect to see” photons arrive in the dark zones less often than in the light zones, without it also being true that photons actually arrive in the dark zones less often than they arrive in the light zones?
Sorry. I edited my post to make it clearer before I saw yours, so the part you quoted has now disappeared. Anyway, I’m not entirely on board with the Deutsch-Wallace program, so I’m not going to offer a full defense of their view. I do want to make sure it’s clear what they claim to be doing.
Consider a simpler case than the two-slit experiment: a Stern-Gerlach experiment on spin-1/2 particles prepared in the superposition sqrt(1/4) |up> + sqrt(3/4) |down>. Ignoring fuzzy world complications for now, the Everettian says that upon measurement of the particle, my branch will split into two branches. In one branch, a future self will observe spin-up, and in the other branch a future self will observe spin-down. All of this is determined by the Schrodinger dynamics. The Born probabilities don’t enter into it.
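For concreteness, the relationship between the amplitudes in that state and the Born probabilities is just squaring. A minimal sketch, purely illustrative, using the numbers from the example above:

```python
import math

# The state sqrt(1/4)|up> + sqrt(3/4)|down> assigns each branch an
# amplitude; the Born rule reads off probabilities as squared amplitudes
# (the amplitudes here are real, so no complex conjugation is needed).
amplitudes = {"up": math.sqrt(1 / 4), "down": math.sqrt(3 / 4)}

born_probabilities = {outcome: amp ** 2 for outcome, amp in amplitudes.items()}

print(born_probabilities)  # up: 0.25, down: 0.75 (up to float rounding)
```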
Where the Born probabilities enter is in how I should behave pre-split. As an Everettian, I am not in a genuine state of subjective uncertainty about what will happen, but I am in the weird position of knowing that I’m going to be splitting. According to Wallace (and I’m not sure I agree with this), the appropriate way to behave in this circumstance is not as if I’m going to turn into two separate people; it is basically psychologically impossible for a human being to take that attitude. Instead, I should behave as if I am subjectively uncertain about which of the two future selves is going to be me. Perhaps on some intellectual level I know that both of them will be me, but we have not evolved to account for such fission in our decision-making processes, so I have to treat it as a case where I am going to end up as just one of them, but I don’t know which one.
Adopting this position of faux subjective uncertainty, I should plan for the future as if maximizing expected utility. And if I am organizing my beliefs this way, the decision-theoretic argument establishes that I should set my probabilities in accord with the Born rule. In this case, the probabilities do not stem from genuine uncertainty, and they do not represent frequencies. So the fact that I expect to see spin-down does not mean that spin-down is more likely to happen in any ordinary sense. It means that as a rational agent, I should behave as if I am more likely to head down the spin-down branch.
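As a toy illustration of what “planning as if maximizing expected utility with Born-rule weights” comes to: here is a sketch with a made-up bet and made-up payoffs (nothing below is from Wallace; the weights come from the spin example above):

```python
# Toy decision problem (payoffs invented for illustration): the agent must
# value a bet that pays 100 if the measurement shows spin-down and 0 if it
# shows spin-up. On the Deutsch-Wallace view, the rational agent weights
# each branch by its Born weight (squared amplitude), even though both
# branches will exist.
born_weight = {"up": 0.25, "down": 0.75}
payoff = {"up": 0.0, "down": 100.0}

expected_utility = sum(born_weight[b] * payoff[b] for b in born_weight)
print(expected_utility)  # 75.0
```

The point of the sketch is only that the Born weights function as decision weights, not as frequencies or chances.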
The problematic step here is the one where decision-making in a branching world is posited to have the same rational structure as decision-making in a situation of uncertainty, even though there is no genuine uncertainty. There are a number of arguments for and against this proposition that we can go into if you like. For now, suffice it to say that I remain unconvinced that this is the right way to make decisions when faced with fission, but I don’t think the idea is completely insane. Wallace’s thoughts on this question are here: http://philsci-archive.pitt.edu/3811/1/websites.pdf
There is still the problem that if all histories exist and if they exist equally, then the majority of them will look nothing like the real world, the shape of which depends upon some things happening more often than others. Regardless of the validity of this reasoning about “decision-making in a branching world”, the characteristic experience of an agent in this sort of multiverse (where all possible histories exist equally) will be of randomness. If we think at the basic material level, agents shouldn’t even exist in most branches; atoms will just disintegrate, and basic fields will do random things. If we ignore that and (inconsistently) assume enough stability to have a sequence of measurements, the measurement statistics will be wrong—if we repeat your experiment, spin up will be seen as often as spin down, because the coefficients (or the measure, if you wish) are playing no existential role.
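The contrast being drawn here can be made concrete: if every branch counts equally, the typical branch of a repeated experiment shows roughly 50% spin-down, whereas the Born measure concentrates almost all its weight on branches showing roughly 75% spin-down. A sketch under the idealized two-outcome setup above (illustrative only):

```python
from itertools import product
from math import prod

# Born weights from the state sqrt(1/4)|up> + sqrt(3/4)|down>.
p = {"up": 0.25, "down": 0.75}
n = 10  # number of repeated measurements

# Every length-n sequence of outcomes is one branch: 2**n branches in all.
branches = list(product(["up", "down"], repeat=n))

def n_down(branch):
    return branch.count("down")

def born_weight(branch):
    # Weight of a branch = product of the Born weights of its outcomes.
    return prod(p[outcome] for outcome in branch)

# Naive branch-counting: every branch counts once, so the typical branch
# shows about 50% down (here: 4 to 6 downs out of 10).
count_near_50 = sum(1 for b in branches if 4 <= n_down(b) <= 6) / len(branches)

# Born measure: almost all of the weight sits on branches showing about
# 75% down (here: 6 to 9 downs out of 10).
weight_near_75 = sum(born_weight(b) for b in branches if 6 <= n_down(b) <= 9)

print(f"fraction of branches near 50% down, counted equally: {count_near_50:.2f}")  # ~0.66
print(f"Born weight of branches near 75% down: {weight_near_75:.2f}")  # ~0.87
```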
I can see a defense for Wallace: he can claim that because “there is no number of worlds”, you’re not allowed to count them like I’m doing and draw the obvious conclusion that |down,down> will exist once and |down,up> will exist once. It seems that not only are we not allowed to ask how many worlds there are, we’re not even allowed to ask questions like “what is the characteristic experience of an agent in this superposition?”, because implicitly that is also branch-counting.
The whole thing is sounding decisively implausible at this point, since we end up requiring that all physical order somehow derives from “multiverse agent rationality”, rather than from genuine microphysical cause and effect. The Born rule isn’t only responsible for the Stern-Gerlach experiment turning out right; you need it in order for every material object to remain stable, rather than immediately turning into a random plasma that belongs to the majority class of physical configurations.
If the decision-theoretic argument works, then a rational agent should expect to find herself in a branch which respects quantum statistics, so it should not surprise her to find herself in such a branch. Perhaps there is some measure according to which “most” observers are in branches where quantum statistics aren’t respected, but that measure is not one that should guide the expectations of rational agents, so I don’t see why it should be surprising that we are not typical observers in the sense of typicality associated with that measure.
It’s sounding like a Boltzmann brain… the observer who happens to have memories of Born-friendly statistics should still be blasted into random pieces in the next moment.
I haven’t pinned down the logic of it yet, but I do believe this issue—that the validity of quantum statistics is required for anything about observed reality to have any stability—seriously, even fatally, undermines Wallace’s argument. Consider your assumption 2, “Arbitrary quantum superpositions can be prepared”. This is the analogue, in the decision-theoretic argument, of Bohr’s original assumption that there is a classical world which provides the context of quantum measurements. That assumption is unsatisfactory if we are trying to explain, solely in terms of quantum mechanics, how a “classical world” manages to exist. It looks the same for Wallace: he is presupposing the existence of a world stable enough that an agent can exist in it, interact with it, and perform actions with known outcomes. We are told that we can get this from Schrodinger dynamics alone, but Schrodinger dynamics will also produce nonzero amplitudes for all the configurations where the world has dissolved into plasma. Since we are trying to justify the Born rule interpretation of those amplitudes, we can’t neglect consideration of these disintegrating-world branches just because the amplitude is small; that would be presupposing the conclusion. Also, observer selection won’t help, because there will be branches where the observer survives but the apparatus disintegrates.
It all sounds absurd, but this results directly from trying to talk about physical processes without using the part of QM that gives us the probabilities. When we do use that part, we can safely say that the spontaneous disintegration of everyday objects is not impossible, but so utterly unlikely that it is of no practical interest. When we try to describe reality without it, then all possible futures start on an equal footing, and most of them end in plasma. I just do not see how the argument can even get started.
The decision-theoretic argument is not supposed to prove everything. It’s supposed to explain why agents living in environments that have so far been stable should set their credences according to the Born probabilities. So, yes, there are presuppositions involved. But I don’t see how this is a devastating problem for Everettianism.
You brought up Boltzmann brains. It turns out that our best cosmological models predict that most observers in the universe will be Boltzmann brains. The universe will gradually approach an eternally expanding cold de Sitter phase, and thermal fluctuations in quantum fields will produce an infinity of Boltzmann brain type observers. Do you think this is a devastating objection to cosmology? I think the appropriate tack is to recognize anthropics as an important issue that we need to work on understanding, but in the meantime proceed with using those cosmological models under the assumption that we are not Boltzmann brain type observers.
Anyway, the kind of problem you’re raising now is not one that Wallace’s decision-theoretic argument is intended to solve. This paper by Greaves and Myrvold might be relevant to your concerns, but I haven’t read it yet: http://philsci-archive.pitt.edu/4222/1/everett_and_evidence_21aug08.pdf

The abstract:

Much of the evidence for quantum mechanics is statistical in nature. Relative frequency data summarizing the results of repeated experiments is compared to probabilities calculated from the theory; close agreement between the observed relative frequencies and calculated probabilities is taken as evidence in favour of the theory. The Everett interpretation, if it is to be a candidate for serious consideration, must be capable of doing justice to this sort of reasoning. Since, on the Everett interpretation, all outcomes with nonzero amplitude are actualized on different branches, it is not obvious that sense can be made of ascribing probabilities to outcomes of experiments, and this poses a prima facie problem for statistical inference. It is incumbent on the Everettian either to make sense of ascribing probabilities to outcomes of experiments in the Everett interpretation, or to find a substitute on which the usual statistical analysis of experimental results continues to count as evidence for quantum mechanics, and, since it is the very evidence for quantum mechanics that is at stake, this must be done in a way that does not presuppose the correctness of Everettian quantum mechanics. This requires an account of theory confirmation that applies to branching-universe theories but does not presuppose the correctness of any such theory. In this paper, we supply and defend such an account. The account has the consequence that statistical evidence can confirm a branching-universe theory such as Everettian quantum mechanics in the same way in which it can confirm a non-branching probabilistic theory.