One of the things impeding the many worlds vs wavefunction-collapse dialogue is that nobody seems to be able to point to a situation in which the difference clearly matters, where we would make a different decision depending on which theory we believe. If there aren’t any, pragmatism would instruct us to write the question off as meaningless.
Has anyone tried to pose a compelling thought experiment in which the difference matters?
Any collapse (if it does happen) occurs so ‘late’ that current experiments are unable to differentiate between many worlds and collapse. It therefore seems quite possible that both theories will continue to give identical predictions for all realisable situations, with the only difference being ‘one branch becomes realised’ versus ‘all branches become realised’.
General:
Assuming this practical indistinguishability between the theories, I think that any utility function based on one of the theories can be directly translated into the other by simply reinterpreting the theory-inherent probabilities. This assumes that all branches in the many-worlds reasoning are weighted by their ‘probability’ (e.g. the Quantum Russian Roulette thought experiment hinges on counting ‘I survive’-branches differently¹).
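To make the translation concrete, here is a minimal Python sketch (the numbers are purely illustrative, mine rather than anything above): under the Born weighting, the many-worlds sum over branches and the collapse expected value are literally the same computation, read two ways.

```python
# Minimal sketch: branch-weighted many-worlds utility vs. collapse expected
# utility. Weights and utilities are made-up illustrative values.

outcomes = {"survive": 0.9, "die": 0.1}        # Born weights / probabilities
utility = {"survive": 100.0, "die": -1000.0}   # hypothetical utilities

# Collapse reading: expected utility over mutually exclusive outcomes.
collapse_eu = sum(p * utility[o] for o, p in outcomes.items())

# Many-worlds reading: total utility over coexisting branches, weighted by measure.
many_worlds_u = sum(w * utility[o] for o, w in outcomes.items())

assert collapse_eu == many_worlds_u  # the 'translation' is pure reinterpretation
```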
More human-related:
One relevant aspect is how natural utility maximisation feels when using one of the two theories as a world model. Thinking in many-worlds terms makes expected utility maximisation a lot more vivid than when the different future outcomes are ‘mere probabilities’; on the other hand, this vividness also makes it easier to rationalise pre-existing intuitions.
Another point is that most people strongly value existence/non-existence in addition to the quality and ‘probability’ of existence (e.g. people might play Quantum Russian Roulette but not normal Russian Roulette, because many worlds ensures that they will survive [in some branches]). This makes many worlds feel more comforting when facing high probabilities of grim futures.
A third aspect is the consequences for the concept of identity. Adopting many worlds as a world model also means that naive models of self and identity are up for a major revision. As argued above, valuing all future branch selves equally (i.e. weighted by the ‘probabilities’) should make many worlds and collapse equivalent (up to the ‘certain survival [in some branches]’ aspect). A different choice in accounting for many-worlds branches might not be translatable into the collapse world model.
Disclaimer:
I am still very much confused by decision theories that involve coordination without a causal link between agents, such as Multiverse-wide Cooperation. For such theories, other considerations might also be important.
----
¹: To be more exact, I would argue that the case for Quantum Russian Roulette becomes identical to the case for normal Russian Roulette if many-worlds branches are weighted by their ‘probabilities’ and one also takes into account the ‘certain survival [in some branches]’ bonus that many worlds gives.
Mm, agreed. We’re fans of quantities, rather than qualities, so I may have been underrecognizing this.
Humans clearly have special concerns about not existing at all, that extend beyond the linear concern for merely existing less. A quantum multiverse (or maybe even just a physically large multiverse, with chance recurrences) would soundly and naturally decrease a human’s aversion to death, to some extent.
I think it’s more natural to ask “how might an agent behave differently as a result of believing an objective collapse theory?” One answer that comes to mind is that they will be less likely to invest in quantum computers, which rely on maintaining entanglement between a large number of quantum systems; under objective collapse theories, that entanglement might not be maintained (depending on the exact collapse theory). Similarly, other physical theories of quantum mechanics will result in different predictions about what will happen in various somewhat-arcane situations.
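A rough back-of-the-envelope sketch of my own (using the approximate GRW localisation rate of about 1e-16 per particle per second; other collapse models use different parameters): the rate at which spontaneous collapse disrupts an entangled state grows with the number of particles involved, which is roughly why large coherent systems are where collapse and no-collapse theories come apart.

```python
# Back-of-the-envelope: in GRW-style theories each particle localises
# spontaneously at roughly 1e-16 per second, and an N-particle entangled
# state is disrupted at roughly N times that rate. Numbers are approximate.

GRW_RATE_PER_PARTICLE = 1e-16  # per second, approximate GRW value

def expected_coherence_time(n_particles: float) -> float:
    """Mean time in seconds before the first spontaneous collapse event."""
    return 1.0 / (n_particles * GRW_RATE_PER_PARTICLE)

print(expected_coherence_time(1))     # ~1e16 s: a lone particle effectively never collapses
print(expected_coherence_time(1e23))  # ~1e-7 s: a macroscopic superposition dies almost at once
```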
More flippantly, an agent might answer the question ‘What do you think the right theory of quantum mechanics is?’ differently.
[Edited to put the serious answer where people will see it in the preview]
Things like determinism and many worlds may not affect fine-grained decision making, but they can profoundly impact what decision making, choice, volition, agency and moral responsibility are. It is widely accepted that determinism affects freedom of choice, excluding some notions of free will. It is less often noticed that many worlds affects moral responsibility, because it removes refraining: if there is the slightest possibility that you would kill someone, then there is a world where you killed someone. You can’t refrain from doing anything that is possible for you to do.
Does that mean that utilitarianism is incompatible with many worlds? If everything that is possible for you to do is something that you actually do, then that would mean that utility, across the whole multiverse, is constant, even assuming some notion of free will.
Everything is possible, but not everything has the same measure (is equally likely). Killing someone in 10% of “worlds” is worse than killing them in 1% of “worlds”.
In the end, believing in many worlds will give you the same results as believing in collapse. It’s just that, epistemologically, the believer in collapse needs to deal with the problem of luck. Does “having a 10% probability of killing someone, and actually killing them” make you a worse person than “having a 10% probability of killing someone, but not killing them”?
(From many-worlds perspective, it’s the same. You simply shouldn’t do things that have 10% risk of killing someone, unless it is to avoid even worse things.)
(And yes, there is the technical problem of how exactly you determine that the probability was exactly 10%, considering that you don’t see the parallel “worlds”.)
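(A toy calculation of the point, in my own framing: the branch-weighted badness is fixed the moment the action is taken, not by which branch you later find yourself in.)

```python
# Toy illustration: branch-weighted harm of a risky action. The 'lucky' and
# the 'unlucky' agent who both took the 10% risk score the same 0.10.

def branch_weighted_harm(p_kill: float, harm: float = 1.0) -> float:
    """Measure of branches where the killing happens, times the harm."""
    return p_kill * harm

print(branch_weighted_harm(0.10))  # 0.10 -- worse...
print(branch_weighted_harm(0.01))  # 0.01 -- ...than this
```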
Apart from the other problem: MWI is deterministic, so you can’t alter the percentages by any kind of free will, despite what people keep asserting.
Actually killing them is certainly worse. We place moral weight on actions as well as character.
Neither most collapse-theories nor MWI allow for super-physical free will, so that doesn’t seem relevant to this question. Since the question concerns what one should do, it seems reasonable to assume that some notion of choice is possible.
(FWIW, I’d guess compatibilism is the most popular take on free will on LW.)
Yes, but compatibilism doesn’t suggest that you choose between different actions or between different decision theories.
Wait, what? If compatibilism doesn’t suggest that I’m choosing between actions, what am I choosing between?
Theories, imaginary ideas.
No, if 99% of timelines have utility 1, while in 1% of timelines something very improbable happens and you instead cause utility to go to 0, the global utility is still pretty much 1. Some part of the human utility function seems to care about absolute existence or nonexistence, and that component is going to be sort of steamrolled by multiverse theory, but we will mostly just keep on going in pursuit of greater relative measure.
That amounts to saying that if the conjunction of MWI and utilitarianism is correct, we would or should behave as though it isn’t. That is a major departure from typical rationalism (e.g. the Litany of Tarski).
The question isn’t really whether it’s correct, the question is closer to “is it equivalent to the thing we already believed”.
There is the Quantum Russian Roulette thought experiment. It was posted on LessWrong.
Yeah. I reject it. If you’re any good at remapping your utility function after perspective shifts (“rescuing the utility function”), then, after digesting many worlds, you will resolve that being dead in all probable timelines is pretty much what death really is, then, and you have known for a long time that you do not want death, so you don’t have much use for quantum suicide gambits.
Many of the other comments deal with thought experiments rather than looking at the reality of how “many worlds” is USED. From my point of view as a non-physicist, it seems to primarily be used as pseudo-scientific “woo”: a revival of mystery and awe under the cloak of scientific authority. A kind of paradoxical mysticism for non-religious people, or fans of “science-ism”.
An agent might act differently from MISUNDERSTANDING many-worlds theory, or from paying more attention to it. Psychological “priming” is real and powerful.
The answer by TAG below is a case in point. For someone committed to a belief in determinism or fatalism, having a many-worlds theory in mind may buttress that belief.
If they are put into an interferometer, someone who thinks the wavefunction has collapsed would think, while in the middle, that they have a 50/50 chance of coming out of each arm, while an Everettian will make choices as if they will deterministically come out of one arm (depending on the construction of the interferometer).
The difficulty of putting humans into interferometers is more or less why this doesn’t matter much. Though of course “pragmatism” shouldn’t stop us from applying Occam’s razor.
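For the curious, here is the standard Mach-Zehnder arithmetic behind this, as a sketch (my example, assuming an idealised balanced interferometer): unitary evolution sends the system out of one arm with certainty, while a mid-path collapse restores 50/50 output statistics.

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # idealised 50/50 beam splitter

psi_in = np.array([1.0, 0.0])  # enter through port 0

# No collapse: two beam splitters in sequence, interference intact.
psi_out = BS @ BS @ psi_in
print(np.abs(psi_out) ** 2)    # [0, 1]: deterministically exits port 1

# Collapse in the middle: the superposition becomes a classical mixture.
mid_probs = np.abs(BS @ psi_in) ** 2  # probability of each internal path
p_out = sum(p * np.abs(BS @ path) ** 2 for p, path in zip(mid_probs, np.eye(2)))
print(p_out)                   # [0.5, 0.5]: interference destroyed
```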
Assume you put enormous weight on avoiding being tortured, and you recognize that signing up for cryonics carries some (very tiny) chance that you will be revived in an evil world that will torture you; absent many worlds, this causes you not to sign up for cryonics. There is an argument that under many worlds there will be versions of you that are going to be tortured regardless, so your goal should be to reduce the percentage of those versions that get tortured. Signing up for cryonics in this world means you are vastly more likely to be revived and not tortured than revived and tortured, so signing up will likely lower the percentage of yous across the multiverse who are tortured, and it reduces the relative importance of the versions of you trapped in worlds where the Nazis won and are torturing you.
If you use some form of noncausal decision theory, it can make a difference.
Suppose Omega flips a quantum coin: if it’s tails, they ask you for £1; if it’s heads, they give you £100 if and only if they predict that you would have given them £1 had the coin landed tails.
There are some decision algorithms that would pay the £1 if and only if they believed in quantum many worlds. (A CDT agent would never pay, and a UDT agent would always pay, regardless.)
It is of course possible to construct agents that want to do X if and only if quantum many worlds is true. It is also possible to construct agents that do the same thing whether it’s true or false (e.g. AlphaGo).
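A toy expected-value tally of the setup (my sketch, using the payoffs given above): evaluated over the whole policy, paying wins; evaluated only from inside the tails outcome, it loses.

```python
# Counterfactual mugging with Omega's quantum coin: value of a policy,
# averaged over both coin outcomes before the flip.

P_HEADS, PAY, PRIZE = 0.5, 1.0, 100.0

def policy_value(pays_on_tails: bool) -> float:
    heads = PRIZE if pays_on_tails else 0.0  # Omega rewards the predicted payer
    tails = -PAY if pays_on_tails else 0.0
    return P_HEADS * heads + (1 - P_HEADS) * tails

print(policy_value(True))   # 49.5: the UDT perspective says pay
print(policy_value(False))  # 0.0
# A CDT agent conditioning on 'the coin already came up tails' compares
# -1 against 0 and refuses, whatever the physics of the coin.
```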
The answer to this question depends on which wavefunction collapse theory you use. There are a bunch of quantum superposition experiments where we can detect that no collapse is happening: if photons collapsed their superposition in the double-slit experiment, we wouldn’t get an interference pattern. Collapse theories therefore postulate circumstances, not yet probed by experiment, under which collapse happens. If you believe that quantum collapse only happens when 10^40 kg of mass are in a single coherent superposition, this belief has almost no effect on your predictions.
If you believe that you can’t get 100 atoms into superposition, then you are wrong; current experiments have tested that. If you believe that collapse happens at the 1-gram level, then future experiments could test this. In short, there are collapse theories in which collapse is so rare that you will never spot it, there are theories where collapse is so common that we would have already spotted it (so we know those theories are wrong), and there are theories in between. The in-between theories will make different predictions about future experiments: they will not expect large quantum computers to work.
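As a sketch of that spectrum (my own illustrative thresholds, correct only to rough order of magnitude): where a hypothesised collapse scale sits relative to experiment determines whether the theory is already falsified, testable, or predictively idle.

```python
# Illustrative classifier; masses in kilograms, rough orders of magnitude only.

LARGEST_TESTED_SUPERPOSITION = 1e-22  # ~1e5 amu molecules have shown interference
PLAUSIBLY_TESTABLE_SOON = 1e-3        # optimistic future experiments, ~1 gram

def status(collapse_mass_kg: float) -> str:
    if collapse_mass_kg <= LARGEST_TESTED_SUPERPOSITION:
        return "already ruled out"
    if collapse_mass_kg <= PLAUSIBLY_TESTABLE_SOON:
        return "testable by future experiments"
    return "predictively idle (never observable in practice)"

print(status(1e-24))  # '100 atoms collapse': already ruled out
print(status(1e-3))   # '1 gram': testable
print(status(1e40))   # '1e40 kg': predictively idle
```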
Another difference is that current QFT doesn’t contain gravity. In the search for a true theory of everything, many worlds and collapse might suggest different successors. This seems important to human understanding. It wouldn’t make a difference to an agent that could consider all possible theories.
Go on then, which decision algorithms? Note, though: They do have to be plausible models of agency. I don’t think it’s going to be all that informative if a pointedly irrational model acts contingent on foundational theory when CDT and FDT don’t.
An agent might care about (and acausally cooperate with) all versions of himself that “exist”. MWI posits more versions of himself. Imagine that he wants there to exist an artist like he could be, and a scientist like he could be, but the first 50% of universes that contain each are more important than the second 50%. Then under MWI he could throw a quantum coin to decide what to dedicate himself to, while under CI this would sacrifice one of his dreams.
The agent first updates on the evidence that it has, and then takes logical counterfactuals over each possible action. This behaviour means that it only cooperates in Newcomb-like situations with agents it believes actually exist. It will one-box in Newcomb’s problem, and cooperate with an identical duplicate of itself. However, it won’t pay in logical counterfactual blackmail, or in any sort of counterfactual blackmail accomplished with true randomness.
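One toy way to cash out “only cooperates with agents it believes actually exist” in code (my construction, not a standard algorithm): for Omega’s quantum coin above, whether the heads-branch self “actually exists” after you see tails is precisely where many worlds and collapse diverge, so such an agent pays if and only if it believes in many worlds.

```python
# Toy agent: updates on seeing tails, then only counts value flowing to
# branches it believes are real. Payoffs match the thought experiment above.

PAY, PRIZE, P_HEADS = 1.0, 100.0, 0.5

def pays_after_seeing_tails(believes_many_worlds: bool) -> bool:
    heads_branch_measure = P_HEADS if believes_many_worlds else 0.0
    # 'Pay' costs 1 in the tails branch(es) but earns 100 in any still-real
    # heads branch via Omega's prediction.
    value_of_paying = heads_branch_measure * PRIZE - (1 - heads_branch_measure) * PAY
    return value_of_paying > 0.0

print(pays_after_seeing_tails(True))   # True: pays iff it believes in many worlds
print(pays_after_seeing_tails(False))  # False
```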
(I think this is a good chance for you to think of an answer yourself.)