The latest attempt at a decision-theoretic account of QM probabilities is David Wallace’s, here: http://arxiv.org/PS_cache/arxiv/pdf/0906/0906.2718v1.pdf . I mention this because this proof is not susceptible to the criticisms that Barnum et al. raise against Deutsch’s proof.
If we’re going to be talking about the approach, it’s worth getting some sense of the argument. Below, I’ve reproduced a very non-technical summary. I describe the decision problem, the assumptions (which Wallace regards as intuitive constraints on rational decision-making, although I’m not sure I agree), and the representation theorem itself. It is a remarkable result: the assumptions seem fairly weak, yet the theorem is striking. To get the gist, skip down to the Representation Theorem section. If it seems that it couldn’t possibly be true, look at the assumptions and think about which one you want to reject, because the theorem does follow from (appropriately formalized versions of) them.
The Decision Problem
The agent is choosing between different preparation-measurement-payment (or p-m-p) sequences (Wallace calls them acts, but that terminology is counter-intuitive, so I avoid it). In each sequence, some quantum state is prepared, it is then measured in some basis, and rewards are doled out to the agent’s future selves on the basis of the measurement outcomes in their respective branches. An example sequence: a state is prepared in the superposition sqrt(1/4) |up> + sqrt(3/4) |down>, a measurement is made in the up-down basis, and then the future self of the agent in the |up> branch is given a reward while the future self in the |down> branch is not.
The agent has a preference ordering over all possible p-m-p sequences. Of course, in any particular decision problem, only some of the possible sequences will be actual options. For example, if the agent is betting on outcomes of a pre-prepared and pre-measured state, then she is choosing between sequences that only differ in the “payment” part of “preparation-measurement-payment”.
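As a toy illustration of the setup (my own representation, not Wallace’s formalism), a p-m-p sequence can be thought of as a pair of maps from measurement outcomes to amplitudes and to payments. A minimal sketch:

```python
from dataclasses import dataclass
from math import sqrt

@dataclass
class PMPSequence:
    """Toy stand-in for a preparation-measurement-payment sequence (my notation, not Wallace's)."""
    amplitudes: dict  # outcome label -> amplitude of the prepared state in the measured basis
    payments: dict    # outcome label -> reward handed to the future self in that branch

# The example sequence from above: prepare sqrt(1/4)|up> + sqrt(3/4)|down>, measure in the
# up-down basis, reward the |up> successor and give the |down> successor nothing.
example = PMPSequence(
    amplitudes={"up": sqrt(1 / 4), "down": sqrt(3 / 4)},
    payments={"up": "reward", "down": "nothing"},
)
```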
The Assumptions
1. One can always set up a p-m-p sequence in which a state is prepared, measured, and then the agent is rewarded regardless of the measurement outcome.
2. Arbitrary quantum superpositions can be prepared.
3. After a p-m-p sequence is completed, any record of the measurement outcomes can always be erased. Two different p-m-p sequences can lead to the same macroscopic state after such an erasure is performed, as long as they differ only in the measurement outcomes and not in the quantum amplitudes and payments associated with those outcomes.
4. For a given initial macrostate, the agent’s preferences define a total ordering over the set of possible p-m-p sequences.
5. The agent’s preferences are diachronically consistent. Say a sequence U takes place between times t0 and t1. At t1, there will be branches corresponding to the different outcomes associated with U. Let Xi and Yi be different p-m-p sequences that could be performed at t1 in the i’th branch. If the agent in the i’th branch prefers Xi over Yi, then the pre-branching agent at t0 prefers U followed by Xi over U followed by Yi. (A toy formal sketch of this constraint follows the list.)
6. The agent cares only about the macroscopic state of the world. She doesn’t prefer one microscopic state over another if they correspond to the same macroscopic state.
7. The agent doesn’t care about branching per se. She doesn’t consider the mere multiplication of future selves in distinct macroscopic states valuable in itself.
8. In the Everettian framework, p-m-p sequences are implemented by unitary transformations. If two different unitary transformations have the same effect on the agent’s branch (but differ in their effects on other branches), the agent is indifferent between them.
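The toy formal sketch promised above, for assumption 5 only (the names and representation are mine, not Wallace’s formalism): diachronic consistency just says that branch-relative preferences at t1 lift to pre-branching preferences at t0 over the corresponding composite sequences.

```python
# Toy check of assumption 5 (diachronic consistency); illustration only.

def diachronically_consistent(branch_prefs, composite_prefs):
    """branch_prefs: {branch i: set of (X, Y) pairs, X preferred to Y at t1 inside branch i}.
    composite_prefs: set of (A, B) pairs, composite sequence A preferred to B at t0.
    Whenever X is preferred to Y inside branch i, 'U then X in branch i' must be
    preferred to 'U then Y in branch i' before the branching."""
    for i, pairs in branch_prefs.items():
        for x, y in pairs:
            if (("U", i, x), ("U", i, y)) not in composite_prefs:
                return False
    return True

# Tiny example with two branches and one strict preference in each:
branch_prefs = {1: {("X1", "Y1")}, 2: {("X2", "Y2")}}
composite_prefs = {(("U", 1, "X1"), ("U", 1, "Y1")), (("U", 2, "X2"), ("U", 2, "Y2"))}
print(diachronically_consistent(branch_prefs, composite_prefs))  # True
```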
The Representation Theorem
The preference ordering over sequences induces a preference ordering over rewards, because for any two rewards R1 and R2, there are p-m-p sequences which lead to R1 in every branch and sequences which lead to R2 in every branch. If a sequence of the first kind is preferred over a sequence of the second kind, then reward R1 is preferred over reward R2.
Given a preference ordering over the rewards, there is a unique (up to affine transformations) utility function over the rewards. If the agent is to use standard decision theory to choose among p-m-p sequences so as to maximize her expectation of reward utility, and we want the expected utilities of the p-m-p sequences to reflect the agent’s given preferences over those sequences, then the probability distribution over outcomes used when calculating the expected utility of p-m-p sequences must be given by the Born rule.
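To see the shape of that claim in the simplest terms, here is a throwaway numerical gloss (the utility values and the two sequences are invented for illustration; this is not the theorem, just the kind of ranking it says the agent’s preferences must fit): Born weighting strictly prefers putting the reward on the higher-amplitude branch, whereas naive equal weighting of the two branches is indifferent.

```python
from math import sqrt

# Illustrative only: a utility function over the two rewards (values invented).
utility = {"reward": 1.0, "nothing": 0.0}

def expected_utility(branches, prob):
    """branches: list of (amplitude, reward); prob: maps a branch amplitude to a probability."""
    return sum(prob(amp) * utility[rew] for amp, rew in branches)

born = lambda amp: abs(amp) ** 2   # Born weight of a branch
count = lambda amp: 0.5            # naive equal weighting of the two branches

# Sequence A: reward on the |up> branch of sqrt(1/4)|up> + sqrt(3/4)|down>.
seq_a = [(sqrt(1 / 4), "reward"), (sqrt(3 / 4), "nothing")]
# Sequence B: same state, but the reward goes to the |down> branch instead.
seq_b = [(sqrt(1 / 4), "nothing"), (sqrt(3 / 4), "reward")]

print(expected_utility(seq_a, born), expected_utility(seq_b, born))    # ~0.25  ~0.75
print(expected_utility(seq_a, count), expected_utility(seq_b, count))  # 0.5  0.5
```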
I just looked at the statement of the theorem in the paper you linked. I would summarize it as:
Given a preference ordering over the rewards, there is a unique (up to affine transformation) utility function over the rewards with the property that this utility function recovers the preferences over sequences leading to those rewards iff the expected utility of the sequences is calculated using the Born probabilities.
Is that correct? Or does the result rule out the existence of a utility function which recovers the preferences when you calculate expected utility using that utility function and non-Born probabilities?
The way Wallace expresses the theorem in the paper is misleading. The theorem does rule out utility functions that recover preferences if expected utility is calculated using non-Born probabilities. I think many people, at first glance, interpret the theorem the way you did, which makes it seem much less impressive, and not really a justification of the Born probabilities at all.
The way to read the theorem is not “… there is a unique utility function with the property that...”; it is “... there is a unique utility function, and it has the property that...”
Ah, I see. Yes, that kind of result is remarkable.
I don’t know what you mean, though, by “there is a unique (up to affine transformations) utility function over the rewards”. If you mean there is a unique utility function on rewards that recovers the agent’s preferences on rewards, that’s false. But I don’t know what else you could mean.
Ah, I thought of a charitable interpretation of “there is a unique (up to affine transformations) utility function over the rewards”. Given a preference ordering on sequences of rewards, there is a unique utility function on individual rewards that recovers that preference ordering. I believe this because if rewards are repeatable, the diachronicity hypothesis implies that any utility function on sequences of rewards must be additive. (We also need a hypothesis ruling out lexicographically-ordered preferences.)
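In symbols, the claim I have in mind (my paraphrase, not a quotation of Wallace’s axioms): for repeatable rewards, diachronic consistency forces the utility of a finite sequence of rewards to decompose additively, and the additive decomposition is what pins the reward utility down to a positive affine family.

```latex
% Paraphrase, not Wallace's exact formalism.
U(r_1, r_2, \ldots, r_n) \;=\; \sum_{i=1}^{n} u(r_i),
\qquad
u'(r) \;=\; a\,u(r) + b \quad (a > 0)
% where u' is any other reward utility recovering the same
% preference ordering over sequences of rewards.
```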
You’re right. Diachronic consistency is required to establish the uniqueness of the utility function. Also, Wallace does include continuity axioms that rule out lexically ordered preferences, but I left them out of my summary for the sake of simplicity.
Before I try to parse this argument: do you really think this line of reasoning can explain why there are dark regions in the double-slit experiment? Are you really going to explain that in terms of the utility function of a perceiving agent?!
I don’t think it explains why I actually see dark regions in the double slit experiment. That is explained by the Schrodinger dynamics without any need for appeal to probabilistic resources. The decision-theoretic argument does tell me why I should assign probability close to zero that the detector will show a flash in certain regions. But this is a fact about how I should set my credences, and so it’s not entirely absurd that it has to do with my status as a decision-making agent.
What the decision theoretic account explains is why I should expect to see dark regions in a double slit experiment.
So how does it explain that? How can the imperative to maximize your expected utility require you to “expect to see” photons arrive in the dark zones less often than they arrive in the light zones, without it also being true that photons actually arrive in the dark zones less often than they arrive in the light zones?
Sorry. I edited my post to make it clearer before I saw yours, so the part you quoted has now disappeared. Anyway, I’m not entirely on board with the Deutsch-Wallace program, so I’m not going to offer a full defense of their view. I do want to make sure it’s clear what they claim to be doing.
Consider a simpler case than the double-slit experiment: a Stern-Gerlach experiment on spin-1/2 particles prepared in the superposition sqrt(1/4) |up> + sqrt(3/4) |down>. Ignoring fuzzy world complications for now, the Everettian says that upon measurement of the particle, my branch will split into two branches. In one branch, a future self will observe spin-up, and in the other branch a future self will observe spin-down. All of this is determined by the Schrodinger dynamics. The Born probabilities don’t enter into it.
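For concreteness, here is a minimal numerical sketch of that point, modeling the measurement as a generic CNOT-style pointer coupling (my choice of toy model, nothing specific to Wallace): the unitary dynamics produces the two branches and simply carries the amplitudes along, and nothing probabilistic is invoked anywhere.

```python
import numpy as np

# System prepared in sqrt(1/4)|up> + sqrt(3/4)|down>; pointer qubit starts in |0> ("ready").
psi_system = np.array([np.sqrt(1 / 4), np.sqrt(3 / 4)])  # basis: |up>, |down>
pointer_ready = np.array([1.0, 0.0])                      # basis: |0>, |1>

initial = np.kron(psi_system, pointer_ready)  # basis: |up,0>, |up,1>, |down,0>, |down,1>

# Measurement interaction as a CNOT unitary: the pointer records the spin
# (|up,0> -> |up,0>, |down,0> -> |down,1>). Pure Schrodinger dynamics, no Born rule.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

final = cnot @ initial
print(final)  # [0.5, 0, 0, ~0.866]: two branches, |up, pointer up> and |down, pointer down>,
              # each containing a definite record, with the amplitudes carried along unchanged.
```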
Where the Born probabilities enter is in how I should behave pre-split. As an Everettian, I am not in a genuine state of subjective uncertainty about what will happen, but I am in the weird position of knowing that I’m going to be splitting. According to Wallace (and I’m not sure I agree with this), the appropriate way to behave in this circumstance is not to treat myself as literally about to turn into two separate people; it is basically psychologically impossible for a human being to hold that attitude anyway. Instead, I should behave as if I am subjectively uncertain about which of the two future selves is going to be me. Perhaps on some intellectual level I know that both of them will be me, but we have not evolved to account for such fission in our decision-making processes, so I have to treat it as a case where I am going to end up as just one of them, but I don’t know which one.
Adopting this position of faux subjective uncertainty, I should plan for the future as if maximizing expected utility. And if I am organizing my beliefs this way, the decision theoretic argument establishes that I should set my probabilities in accord with the Born rule. In this case, the probabilities do not stem from genuine uncertainty, and they do not represent frequencies. So the fact that I expect to see spin-down does not mean that spin-down is more likely to happen in any ordinary sense. It means that as a rational agent, I should behave as if I am more likely to head down the spin-down branch.
The problematic step here is the one where decision-making in a branching world is posited to have the same rational structure as decision-making in a situation of uncertainty, even though there is no genuine uncertainty. There are a number of arguments for and against this proposition that we can go into if you like. For now, suffice it to say that I remain unconvinced that this is the right way to make decisions when faced with fission, but I don’t think the idea is completely insane. Wallace’s thoughts on this question are here: http://philsci-archive.pitt.edu/3811/1/websites.pdf
There is still the problem that if all histories exist, and exist equally, then the majority of them will look nothing like the real world, whose shape depends upon some things happening more often than others. Regardless of the validity of this reasoning about “decision-making in a branching world”, the characteristic experience of an agent in this sort of multiverse (where all possible histories exist equally) will be of randomness. If we think at the basic material level, agents shouldn’t even exist in most branches; atoms will just disintegrate, and basic fields will do random things. If we ignore that and (inconsistently) assume enough stability to have a sequence of measurements, the measurement statistics will be wrong: if we repeat your experiment, spin up will be seen as often as spin down, because the coefficients (or the measure, if you wish) are playing no existential role.
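To put a crude number on that worry (my own back-of-the-envelope calculation, using the naive equal counting of branches that this objection relies on, and which Wallace would reject as ill-defined), compare how much of each measure sits on Born-friendly histories after n repetitions of the sqrt(1/4)/sqrt(3/4) experiment:

```python
from math import comb

n = 20          # number of repeated spin measurements
p_born = 1 / 4  # Born weight of the |up> outcome, |sqrt(1/4)|^2

# Histories whose frequency of up-results is within 0.1 of the Born value 1/4:
born_friendly = [k for k in range(n + 1) if abs(k / n - p_born) <= 0.1]

# Fraction of the 2^n equally-counted outcome sequences that are Born-friendly,
# versus the total Born weight carried by those same sequences.
count_measure = sum(comb(n, k) for k in born_friendly) / 2 ** n
born_measure = sum(comb(n, k) * p_born ** k * (1 - p_born) ** (n - k) for k in born_friendly)

print(count_measure)  # ~0.13: counted equally, most branches show ~50% up, not ~25%
print(born_measure)   # ~0.81: almost all of the Born weight sits on branches near 25% up
```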
I can see a defense for Wallace: he can claim that because “there is no number of worlds”, you’re not allowed to count them like I’m doing and draw the obvious conclusion that |down,down> will exist once and |down,up> will exist once. It seems that not only are we not allowed to ask how many worlds there are, we’re not even allowed to ask questions like “what is the characteristic experience of an agent in this superposition?”, because implicitly that is also branch-counting.
The whole thing is sounding decisively implausible at this point, since we end up requiring that all physical order somehow derives from “multiverse agent rationality”, rather than from genuine microphysical cause and effect. The Born rule isn’t only responsible for the Stern-Gerlach experiment turning out right; you need it in order for every material object to remain stable, rather than immediately turning into a random plasma that belongs to the majority class of physical configurations.
If the decision-theoretic argument works, then a rational agent should expect to find herself in a branch which respects quantum statistics, so it should not surprise her to find herself in such a branch. Perhaps there is some measure according to which “most” observers are in branches where quantum statistics aren’t respected, but that measure is not one that should guide the expectations of rational agents, so I don’t see why it should be surprising that we are not typical observers in the sense of typicality associated with that measure.
It’s sounding like a Boltzmann brain… the observer who happens to have memories of Born-friendly statistics should still be blasted into random pieces in the next moment.
I haven’t pinned down the logic of it yet, but I do believe this issue—that the validity of quantum statistics is required for anything about observed reality to have any stability—seriously, even fatally, undermines Wallace’s argument. Consider your assumption 2, “Arbitrary quantum superpositions can be prepared”. This is the analogue, in the decision-theoretic argument, of Bohr’s original assumption that there is a classical world which provides the context of quantum measurements. That assumption is unsatisfactory if we are trying to explain, solely in terms of quantum mechanics, how a “classical world” manages to exist. It looks the same for Wallace: he is presupposing the existence of a world stable enough that an agent can exist in it, interact with it, and perform actions with known outcomes. We are told that we can get this from Schrodinger dynamics alone, but Schrodinger dynamics will also produce nonzero amplitudes for all the configurations where the world has dissolved into plasma. Since we are trying to justify the Born rule interpretation of those amplitudes, we can’t neglect consideration of these disintegrating-world branches just because the amplitude is small; that would be presupposing the conclusion. Also, observer selection won’t help, because there will be branches where the observer survives but the apparatus disintegrates.
It all sounds absurd, but this results directly from trying to talk about physical processes without using the part of QM that gives us the probabilities. When we do use that part, we can safely say that the spontaneous disintegration of everyday objects is not impossible, but so utterly unlikely that it is of no practical interest. When we try to describe reality without it, all possible futures start on an equal footing, and most of them end in plasma. I just do not see how the argument can even get started.
The decision-theoretic argument is not supposed to prove everything. It’s supposed to explain why agents living in environments that have so far been stable should set their credences according to the Born probabilities. So, yes, there are presuppositions involved. But I don’t see how this is a devastating problem for Everettianism.
You brought up Boltzmann brains. It turns out that our best cosmological models predict that most observers in the universe will be Boltzmann brains. The universe will gradually approach an eternally expanding cold de Sitter phase, and thermal fluctuations in quantum fields will produce an infinity of Boltzmann brain type observers. Do you think this is a devastating objection to cosmology? I think the appropriate tack is to recognize anthropics as an important issue that we need to work on understanding, but in the meantime proceed with using those cosmological models under the assumption that we are not Boltzmann brain type observers.
Anyway, the kind of problem you’re raising now is not one that Wallace’s decision-theoretic argument is intended to solve. This paper by Greaves and Myrvold might be relevant to your concerns, but I haven’t read it yet: http://philsci-archive.pitt.edu/4222/1/everett_and_evidence_21aug08.pdf
The abstract:
Much of the evidence for quantum mechanics is statistical in nature. Relative frequency data summarizing the results of repeated experiments is compared to probabilities calculated from the theory; close agreement between the observed relative frequencies and calculated probabilities is taken as evidence in favour of the theory. The Everett interpretation, if it is to be a candidate for serious consideration, must be capable of doing justice to this sort of reasoning. Since, on the Everett interpretation, all outcomes with nonzero amplitude are actualized on different branches, it is not obvious that sense can be made of ascribing probabilities to outcomes of experiments, and this poses a prima facie problem for statistical inference. It is incumbent on the Everettian either to make sense of ascribing probabilities to outcomes of experiments in the Everett interpretation, or to find a substitute on which the usual statistical analysis of experimental results continues to count as evidence for quantum mechanics, and, since it is the very evidence for quantum mechanics that is at stake, this must be done in a way that does not presuppose the correctness of Everettian quantum mechanics. This requires an account of theory confirmation that applies to branching-universe theories but does not presuppose the correctness of any such theory. In this paper, we supply and defend such an account. The account has the consequence that statistical evidence can confirm a branching-universe theory such as Everettian quantum mechanics in the same way in which it can confirm a non-branching probabilistic theory.
Mitchell, in your previous post you suggested that the natural way to get probabilities in MWI is just by looking at relative frequencies of branches. Obviously, if the representation theorem is correct, this strategy must be incompatible with some of the assumptions. The relevant assumptions here are (5) diachronic consistency and (7) indifference to branching per se.
Wallace gives a good example illustrating why the counting rule violates (5) and (7) on page 28 of the paper I linked above.