So, in the spirit of stupid (but nagging) questions:
The sequences present a convincing case (to me at least) that MWI is the right view of things, and that it is the best conclusion of our understanding of physics. Yet I don’t believe it, because it seems to be in direct conflict with the fact of ethics: if all I can do is push the badness out of my path, and into some other path, then I can’t see how doing good things matters. I can’t change the fundamental amount of goodness, I can just push it around. Yet it matters that I’m good and not bad.
The ‘keep your own path clean’ answer is very unsatisfying, just because it doesn’t work anywhere else. I can’t just keep my own family, neighborhood, city, country, or planet clean. I can’t even just decide to keep my own temporal chunk of the universe clean, while ignoring the rest and even at the expense of the rest of it. Why should this principle suddenly work in the case of other worlds? It seems ad hoc.
So my stupid question is this: why aren’t MWI and ethics just flatly in conflict?
What do you mean by “in conflict”? Believing one says nothing about the other. You’re not “pushing” anything around. If you act good in one set of universes, that is a set of universes made better by your actions. If you act bad in another set, the same holds. Acting good does not cause other universes to become bad.
People making decisions are not quantum events. When a photon could either end up in a detector or not, there are branches where it does and branches where it doesn’t. But when you decide whether or not to do something good, this decision is being carried out by neurons, which are big enough that quantum events do not influence them much. This means that if you decide to do something good, you probably also decided to do the same good thing in the overwhelming majority of Everett branches that diverge from when you started considering the decision.
This may be true, but I don’t think anyone knows for sure, and it seems likely to me that the brain has the property of sensitivity to initial conditions, meaning that it’s likely to do different stuff in different Everett branches.
So this suggests that [....] over timescales like that of human history we
will see an overwhelmingly large number of universes that are completely
identical on the human level—ones where electrons ended up in slightly
different positions but no harm done [...]
More on-topic for the grandparent: Greg Egan’s novella Oracle talks about the ethical issue of bad stuff happening in other Everett branches.
The fact that I can reliably multiply numbers shows that at least some of my decisions are deterministic.
To the extent that I make ethical decisions based on some partially deterministic reasoning process, my ethical decisions are not chaotic.
If, due to chaos, I have a probability p of slapping my friends instead of hugging them, then Laplace’s law of succession tells me that p is less than 1%.
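For concreteness, here is a minimal sketch of the rule-of-succession estimate being invoked (the function name and the trial count of 200 are my own illustration, not from the comment):

```python
def laplace_rule(successes: int, trials: int) -> float:
    """Rule of succession: posterior predictive probability of a success
    on the next trial, assuming a uniform prior over the underlying rate."""
    return (successes + 1) / (trials + 2)

# With 0 slaps observed in 200 hug-opportunities, the estimated
# probability of slapping next time drops below 1%:
p = laplace_rule(0, 200)  # 1/202, about 0.005
```

Under the rule, the estimate falls below 1% once you have roughly a hundred slap-free interactions; with fewer observations the bound is correspondingly weaker.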
There must be chaotic amplification of quantum events going on. Any macroscopic system at finite temperature will be full of quantum events, like a molecule in an excited state returning to its ground state. The quantum randomness is a constant source of “noise” which normally averages out, but sometimes there will be fluctuations away from the mean, and sometimes they will be amplified into mesoscopic and macroscopic differences. This must be true, but it would be best to have a mathematical demonstration, e.g. that the impact of quantum fluctuations on the transfer of heat through an atmosphere will amplify into macroscopically different weather patterns on a certain timescale.
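The claimed amplification can be illustrated with a standard toy model of chaos (my choice of example, not the comment’s): in the logistic map at r = 4, a perturbation at the scale of “quantum noise” grows exponentially until the two trajectories differ at order one.

```python
def logistic_orbit(x0: float, steps: int, r: float = 4.0) -> list:
    """Iterate the chaotic logistic map x -> r*x*(1-x), returning the orbit."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.3, 50)
b = logistic_orbit(0.3 + 1e-12, 50)  # a tiny "quantum-scale" perturbation

# Early on the orbits are indistinguishable; a few dozen steps later the
# exponential growth of the perturbation has made them macroscopically
# different (order-one separation).
early_gap = abs(a[5] - b[5])
late_gap = max(abs(x - y) for x, y in zip(a[40:], b[40:]))
```

A real demonstration for heat transfer through an atmosphere would of course need an actual fluid model, as the comment says; this only shows the generic mechanism of sensitive dependence on initial conditions.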
I have taken lots of decisions based on random bits from Fourmilab or random.org (especially before finding LessWrong—nowadays I only do that when deciding which password to use and stuff like that).
The sequences present a convincing case (to me at least) that MWI is the right view of things, and that it is the best conclusion of our understanding of physics.
Just a caution, here. The sequences only really talk about non-relativistic quantum mechanics (NRQM), and I agree that MWI is the best interpretation of this theory. However, NRQM is false, so it doesn’t follow that MWI is the “right view of things” in the general sense. Quantum field theory (QFT) is closer to the truth, but there are a number of barriers to a straightforward importation of MWI into the language of QFT. I’m reasonably confident that an MWI-like interpretation of QFT can be constructed, but it does not exist in any rigorous form as of yet (as far as I am aware, at least). You should be aware of this before committing yourself to the claim that MWI is an accurate description of the world, rather than just the best way of conceptualizing the world as described by NRQM.
This article (PDF) gives a nice (and fairly accessible) summary of some of the issues involved in extending MWI to QFT. See sections 4 and 8 in particular. Their focus in the paper is wavefunction realism, but given that MWI (at least the version advocated in the Sequences) is committed to wavefunction realism, their arguments apply. They offer a suggestion of the kind of theory that they think can replace MWI in the relativistic context, but the view is insufficiently developed (at least in that paper) for me to fully evaluate it.
A quick summary of the issues raised in the paper:
In NRQM, the wave function lives in configuration space, but there is no well-defined particle configuration space in QFT since particle number is not conserved and particles are emergent entities without precisely defined physical properties.
A move to field configuration space is unsatisfactory because quantum field theories admit of equivalent description using many different choices of field observable. Unlike NRQM, where there are solid dynamical reasons for choosing the position basis as fundamental, there seems to be no natural or dynamically preferred choice in QFT, so a choice of a particular field configuration space description would amount to ad hoc privileging.
MWI in NRQM treats physical space as non-fundamental. This is hard to justify in QFT, because physical space-time is bound up with the fundamentals of the theory to a much greater degree. The dynamical variables in QFT are operators that are explicitly associated with space-time regions.
This objection is particularly clever and interesting, I think. In MWI, the history of the universe is fully specified by giving the universal wavefunction at each time in some reference frame. In a relativistic context, one would expect that all one needs to do in order to describe how the universe looks in some other inertial reference frame is to perform a Lorentz transformation on this history. If the history really tells us everything about the physical state of the universe, then it gives us all the information required to determine how the universe looks under a Lorentz transformation. But in relativistic quantum mechanics, this is not true. Fully specifying the wavefunction (defined on an arbitrarily chosen field configuration space, say) at all times is not sufficient to determine what the universe will look like under a Lorentz transformation. See the example on p. 21 in the paper, or read David Albert’s paper on narratability. This suggests that giving the wavefunction at all times is not a full specification of the physical properties of the universe.
On the other hand, my understanding is that QFT itself doesn’t exist in a rigorous form yet, either.
I assume you’re referring to the infinities that arise in QFT when we integrate over arbitrarily short length scales. I don’t think this shows a lack of rigor in QFT. Thanks to the development of renormalization group theory in the 70s, we know how to do functional integrals in QFT with an imposed cutoff at some finite short length scale. QFT with a cutoff doesn’t suffer from problems involving infinities. Of course, the necessity of the cutoff is an indication that QFT is not a completely accurate description of the universe. But we already know that we’re going to need a theory of quantum gravity at the Planck scale. In the domain where it works, QFT is reasonably rigorously defined, I’d say.
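To make the cutoff idea concrete, here is a schematic toy example of my own (not from the comment, and not any specific QFT calculation): a logarithmically divergent loop-style integral becomes perfectly finite once momenta are cut off at a scale Λ.

```latex
% Toy cutoff-regularized integral (illustrative; m is a mass scale, \Lambda the cutoff)
\int_0^{\Lambda} \frac{k^3\,dk}{\left(k^2+m^2\right)^2}
  = \frac{1}{2}\left[\ln\frac{\Lambda^2+m^2}{m^2}
  - \frac{\Lambda^2}{\Lambda^2+m^2}\right]
```

The ln Λ growth is why the cutoff cannot simply be removed, while any finite Λ yields well-defined answers, which is the sense in which QFT-with-a-cutoff avoids problems with infinities.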
This article (PDF) gives a nice (and fairly accessible) summary of some of the issues involved in extending MWI to QFT.
Thanks for that; it’s quite an interesting article, and I’m still trying to absorb it. However, one thing that seems pretty clear to me is that for EY’s intended philosophical purposes, there really is no important distinction between “wavefunction realism” (in the context of NRQM) and “spacetime state realism” (in the context of QFT). Especially since I consider this post to be mostly wrong: locality in configuration space is what matters, and configuration space is a vector space (specifically a Hilbert space) -- there is no preferred (orthonormal) basis.
I assume you’re referring to the infinities that arise in QFT when we integrate over arbitrarily short length scales. I don’t think this shows a lack of rigor in QFT
If the “problem” is merely that certain integrals are divergent, then I agree. No one says that the fact that $\int_0^1 \frac{1}{x}\,dx$ diverges shows a lack of rigor in real analysis!
What concerns me is whether any actual mathematical lies are being told—such as integrals being assumed to converge when they haven’t yet been proved to do so. Or something like the early history of the Dirac delta, when physicists unashamedly spoke of a “function” with properties that a function cannot, in fact, have.
If QFT is merely a physical lie—i.e., “not a completely accurate description of the universe”—and not a mathematical one, then that’s a different matter, and I wouldn’t call it an issue of “rigor”.
However, one thing that seems pretty clear to me is that for EY’s intended philosophical purposes, there really is no important distinction between “wavefunction realism” (in the context of NRQM) and “spacetime state realism” (in the context of QFT).
I’m a little unclear about what EY’s intended philosophical purposes are in this context, so this might well be true. One possible problem worth pointing out is that spacetime state realism involves abandoning a particular form of reductionism. Whether EY is committed to this form of reductionism is something somebody more familiar with the Sequences than I am would have to judge.
According to spacetime state realism, the physical state of a spacetime region is not supervenient on the physical states of its subregions, i.e. the physical state of a spacetime region could be different without any of its subregions being in different states. This is because subregions can be entangled with one another in different ways without altering their local states. This is not true of wavefunction realism set in configuration space. There, the only way a region of configuration space could have different physical properties is if some of its subregions had different properties.
Also, I think it’s possible that the fact that the different “worlds” in spacetime state realism are spatially overlapping (as opposed to wavefunction realism, where they are separated in configuration space) might lead to interesting conceptual differences between the two interpretations. I haven’t thought about this enough to give specific reasons for this suspicion, though.
Especially since I consider this post to be mostly wrong: locality in configuration space is what matters, and configuration space is a vector space (specifically a Hilbert space) -- there is no preferred (orthonormal) basis.
I’m not sure exactly what you’re saying here, but if you’re rejecting the claim that MWI privileges a particular basis, I think you’re wrong. Of course, you could treat configuration space itself as if it had no preferred basis, but this would still amount to privileging position over momentum. You can’t go from position space to momentum space by a change of coordinates in configuration space. Configuration space is always a space of possible particle position configurations, no matter how you transform the coordinates.
I think you might be conflating configuration space with the Hilbert space of wavefunctions on configuration space. In this latter space, you can transform from a basis of position eigenstates to a basis of momentum eigenstates with a coordinate transformation. But this is not configuration space itself, it is the space of square integrable functions on configuration space. [I’m lying a little for simplicity: Position and momentum eigenstates aren’t actually square integrable functions on configuration space, but there are various mathematical tricks to get around this complication.]
What concerns me is whether any actual mathematical lies are being told—such as integrals being assumed to converge when they haven’t yet been proved to do so. Or something like the early history of the Dirac delta, when physicists unashamedly spoke of a “function” with properties that a function cannot, in fact, have.
If this is your standard for lack of rigor, then perhaps QFT hasn’t been rigorously formulated yet, but the same would hold of pretty much any physical theory. I think you can find places in pretty much every theory where some such “mathematical lie” is relied upon. There’s an example of a standard mathematical lie told in NRQM earlier in my post.
In many of these cases, mathematicians have formulated more rigorous versions of the relevant proofs, but I think most physicists tend to be blithely ignorant of these mathematical results. Maybe QFT isn’t rigorously formulated according to the mathematician’s standards of rigor, but it meets the physicist’s lower standards of rigor. There’s a reason most physicists working on QFT are uninterested in things like Algebraic Quantum Field Theory.
I’m a little unclear about what EY’s intended philosophical purposes are in this context
As I read him, he mainly wants to make the point that “simplicity” is not the same as “intuitiveness”, and the former trumps the latter. It may seem more “humanly natural” for there to be some magical process causing wavefunction collapse than for there to be a proliferation of “worlds”, but because the latter doesn’t require any additions to the equations, it is strictly simpler and thus favored by Occam’s Razor.
I think you might be conflating configuration space with the Hilbert space of wavefunctions on configuration space.
Yes, sorry. What I actually meant by “configuration space” was “the Hilbert space that wavefunctions are elements of”. That space, whatever you call it (“state space”?), is the one that matters in the context of “wavefunction realism”.
(This explains an otherwise puzzling passage in the article you linked, which contrasts the “configuration space” and “Hilbert space” formalisms; but on the other hand, it reduces my credence that EY knows what he’s talking about in the QM sequence, since he doesn’t seem to talk about the space-that-wavefunctions-are-elements-of much at all.)
If this is your standard for lack of rigor, then perhaps QFT hasn’t been rigorously formulated yet, but the same would hold of pretty much any physical theory
This is contrary to my understanding. I was under the impression that classical mechanics, general relativity, and NRQM had all by now been given rigorous mathematical formulations (in terms of symplectic geometry, Lorentzian geometry, and the theory of operators on Hilbert space respectively).
Maybe QFT isn’t rigorously formulated according to the mathematician’s standards of rigor, but it meets the physicist’s lower standards of rigor. There’s a reason most physicists working on QFT are uninterested in things like Algebraic Quantum Field Theory.
The mathematician’s standards are what interests me, and are what I mean by “rigor”. I don’t consider it a virtue on the part of physicists that they are unaware of or uninterested in the mathematical foundations of physics, even if they are able to get away with being so uninterested. There is a reason mathematicians have the standards of rigor they do. (And it should of course be said that some physicists are interested in rigorous mathematics.)
This is a very good post, but I wonder: one of the authors of the paper you cite is David Wallace, perhaps the most prominent proponent of the modern Everettian interpretation.
He just published a new book called “The Emergent Multiverse”, and he claims there is no problem unifying MWI with QFT, because interactions within worlds are local and only states are nonlocal.
I have yet to hear him mention any need for serious reformulation of anything in terms of MWI.
You said you suspect this is necessary, but that you hope we can recover a similar MWI. But isn’t it more reasonable to expect that at the Planck scale something else will explain the quantum weirdness?
After all, if MWI fails on both probability and relativity, then there is no good reason to suspect that this interpretation is correct.
Have you given any thought to Gerard ’t Hooft’s idea of cellular automata, which he claims salvages determinism, locality, and realism?
You said you suspect this is necessary, but that you hope we can recover a similar MWI. But isn’t it more reasonable to expect that at the Planck scale something else will explain the quantum weirdness?
When I talk about recovering MWI, I really just mean absorbing the lesson that our theory does not need to deliver determinate measurement results, and ad hoc tools for satisfying this constraint (such as collapse or hidden variables) are otiose. Of course, the foundations of our eventual theory of quantum gravity might be different enough from those of quantum theory that the interpretational options don’t translate. How different the foundations will be depends on which program ends up working out, I suspect. If something like canonical quantum gravity or loop quantum gravity turns out to be the way to go, then I think a lot of the conceptual work done in interpreting NRQM and QFT will carry over. If string theory turns out to be on the right track, then maybe a more radical interpretational revision will be required. The foundations of string theory are now thought to lie in M-theory, and the nature of this theory is still pretty conceptually opaque. It’s worth noting though that Bousso and Susskind have actually suggested that string theory provides a solid foundation for MWI, and that the worlds in the string theory landscape are the same thing as the worlds in MWI. See here for more on this. The paper has been on my “to read” list for a while, but I haven’t gotten around to it yet. I’m skeptical but interested.
Have you given any thought to Gerard ’t Hooft’s idea of cellular automata, which he claims salvages determinism, locality, and realism?
I know of ‘t Hooft’s cellular automata stuff, but I don’t know much about it. Speaking from a position of admitted ignorance, I’m skeptical. I suspect the only way to construct a genuinely deterministic local realist theory that reproduces quantum statistics is to embrace superdeterminism in some form, i.e. to place constraints on the boundary conditions of the universe that make the statistics work out by hand. This move doesn’t seem like good physics practice to me. Do you know if ’t Hooft’s strategy relies on some similar move?
’t Hooft’s latest paper is the first in which he maps a full QFT to a CA, and the QFT in question is a free field theory. So I think that in this case he evades Bell’s theorem, quantum complexity theorems, etc, by working in a theory where physical detectors, quantum computers, etc don’t exist, because interactions don’t exist. It’s like how you can evade the incompleteness theorems if your arithmetic only has addition but not multiplication. Elsewhere he does appeal to superselection / cosmological initial conditions as a way to avoid cat states (macroscopic superpositions), but I don’t see that playing a role here.
The mapping itself has something to do with focusing on the fractional part of particle momentum as finite, and avoiding divergences by focusing on a particular subspace. It’s not a trivial result. But extending it to interacting field theory will require new ideas, e.g. making the state space of each individual cell in the CA into a Fock space, or permitting CTCs in the CA grid. Surely you need radical ingredients like that in order to recover the full quantum state space…
Aha, I see.
So you do not share EY’s view that MWI is “correct”, then, and that the only problem it faces is recovering the Born Rule?
I agree that obviously what will end up working will depend on what the foundations are :)
I remember that paper by Bousso and Susskind; I even remember sending a mail to Susskind about it, while at the same time asking him about his opinion of ’t Hooft’s work.
If I remember correctly, the paper was discussed at some length over at physicsforums.com (I can’t remember the thread), and it seemed that the consensus was that the authors had misinterpreted decoherence in some way.
I don’t remember the details, but the fact that the paper has not been mentioned or cited in any article I have read since then indicates to me that there must have been some serious error in it.
Also, Susskind’s answer regarding ’t Hooft’s work was illuminating. To paraphrase, he said he felt that ’t Hooft might be correct, but that due to there not being any predictions, it was hard to hold a strong opinion either way. So it seems Susskind was not very sold on his own idea.
On the other hand, my understanding is that QFT itself doesn’t exist in a rigorous form yet, either.
Depends on what you mean by rigorous. (OTOH, it’s not fully compatible with general relativity, so we know it doesn’t exactly describe the world—or that GR doesn’t, or that neither does.)
If you bug physicists enough, they will admit that the Standard Model has some problems, like the Landau pole. However, there are toy QFTs in 2 spatial dimensions that have models rigorous enough for mathematicians. That should be adequate for philosophical purposes.
I don’t think the Landau pole can be characterized as an actual problem. It was considered a problem for strong interactions, but we now know that quantum chromodynamics is asymptotically free, so it does not have a Landau pole. The Landau pole for quantum electrodynamics is at an energy scale much much higher than the Planck energy. We already know that we need new physics at the Planck scale, so the lack of asymptotic freedom in the Standard Model is not a real practical (or even conceptual) problem.
If you don’t like the question I’m answering, complain to Komponisto, not me.
I wasn’t complaining to anyone. And I don’t dislike the question. I was just adding some relevant information. Anyway, I did reply directly to komponisto as well. See the end of my long comment above.
But what would you count as a conceptual problem?
If we did not have independent evidence that QFT breaks down at the Planck scale (since gravity is not renormalizable), I might have considered the Landau pole a conceptual problem for QFT. But since it is only a problem in a domain where we already know QFT doesn’t work, I don’t see it that way.
I don’t think that’s the normal use of “conceptual problem.”
If physicists believe, as their verbiage seems to indicate, that QED is a real theory that approximates reality, and they compute approximations to the numbers in QED while QED is actually inconsistent, then I would say that is an error, and a paradigmatic example of a conceptual error.
What does it mean to interpret an inconsistent theory?
There is the standard MWI advocacy that matches Eliezer’s views. This is a critique of this advocacy, point by point. See especially Q14, re QFT. This gives a reason why MWI is not a useful object of study.
This is a critique of this advocacy, point by point. See especially Q14, re QFT. This gives a reason why MWI is not a useful object of study.
The first critique seems to criticize something different from what Eliezer says. It seems like the person quoted by the author did not express themselves clearly, and the critique attacks the wrong explanation. For example, this part:
When do worlds split?
The precise moment/location of the split is not sharply defined due to the subjective nature of irreversibility, but can be considered complete when much more than kT of energy has been released in an uncontrolled fashion into the environment. At this stage the event has become irreversible.
How can irreversibility be subjective if it defines what a measurement is and when worlds split? It would imply that when worlds split is also a subjective matter. But then it is observer-dependent, the very thing the interpretation is trying to avoid.
For me, Eliezer’s explanation of “blobs of amplitude” makes sense. There is a set of possible configurations, which at the beginning are all very similar, but because some interactions make the differences grow, the set gradually separates into smaller subsets. When exactly? Well, in theory the parts are connected forever, but the connection has only epsilon size relative to the subsets, so it can be ignored. Asking when exactly is like asking “what exactly is the largest number that can be considered ‘almost zero’?”. If you want to be exact, only zero is exactly zero. On the other hand, 1/3^^^3 is for all practical purposes zero. I would feel uncomfortable picking one number and saying “OK, this X is ‘almost zero’, but 1.000001 X is not ‘almost zero’”.
The quoted person seems to say something similar, just less clearly, which allows the critic to use the word “subjective” and jump to the wrong conclusion that the author is saying that mathematics is observer-dependent. (Analogously, just because you and I can have different interpretations of ‘almost zero’, that does not mean mathematics is subjective and observer-dependent. It just means that ‘almost zero’ is not exactly defined, but in real life we care whether e.g. the water we drink contains ‘almost zero’ poison.)
So generally, for me it means that once someone famous gives a wrong (or just ambiguous) explanation of MWI, that explanation will forever be used as an argument against anything similar to MWI.
This gives a reason why MWI is not a useful object of study.
Well, not quite. Someone ought to be thinking about this sort of stuff, and the claim that link makes is that MWI isn’t worth considering because it goes against the “scientific ethos.”
The reason I would tell people why MWI is not a useful object of study (for them) is because until you make it a disagreement about the territory, disagreeing about maps cashes out as squabbling. How you interpret QM should not matter, so don’t waste time on it.
I can’t change the fundamental amount of goodness, I can just push it around.
Wrong (even when assuming there is an exact definition of goodness).
You can’t fix all branches of the universe, because (1) in most branches you don’t exist, and (2) in a very few branches totally random events may prevent your actions. But this does not mean that your actions don’t increase the amount of goodness.
First, you are responsible only for the branches where you existed, so let’s just remove the other branches from our moral equation. Second, exceptionally random events happen only in an exceptionally small proportion of branches. So even if some kind of Maxwell’s demon can ruin your actions in 0.000 … … … 001 of branches, there are still 0.999 … … … 999 of branches where your actions worked normally. And improving such a majority of branches is a good thing.
In each world, people choose the course that seems best to them. Maybe they happen on a different line of thinking, and see new implications or miss others, and come to a different choice. But it’s not that one world chooses each choice. It’s not that one version of you chooses what seems best, and another version chooses what seems worst. In each world, apples go on falling and people go on doing what seems like a good idea.
In all the worlds, people’s choices determine outcomes in the same way they would in just one single world. The choice you make here does not have some strange balancing influence on some world elsewhere. There is no causal communication between decoherent worlds. In each world, people’s choices control the future of that world, not some other world. If you can imagine decisionmaking in one world, you can imagine decision-making in many worlds: just have the world constantly splitting while otherwise obeying all the same rules.
Well, let’s say we posit some starting condition, say the condition of the universe on the day I turned 17. I am down one path from that initial condition, and a great many other worlds exist in which things went a little differently. I take it that it’s not (unfortunately) a physical or logical impossibility that in one or more of those branches, I have ten years down the line committed a murder.
Now, there are a finite number of murder-paths, and a finite number of non-murder-paths, and my path is identical to one of them. But it seems to me that whether or not I murder someone, the total number of murder-paths and the total number of non-murder-paths is the same? Is this totally off base? I hope that it is.
Anyway, if that’s true, then by not murdering, all I’ve done is put myself off of a murder-path. There’s one less murder in my world, but not one less murder absolutely. So, fine, live in my world and don’t worry about the others. But whence that rule? That seems arbitrary, and I’m not allowed to apply it in order to localize my ethical considerations in any other case.
On a macro level, a Many Worlds model should be mathematically equivalent to a One World + Probabilities model. Being unhappy that in 0.01% of Many Worlds you are a murderer is like being unhappy that with probability 0.01% you are a murderer in One World. The difference is that in One World you can later say “I was lucky” or “I was unlucky”, while in the Many Worlds model you can just say “this is a lucky branch” or “this is an unlucky branch”.
But it seems to me that whether or not I murder someone, the total number of murder-paths and the total number of non-murder-paths is the same?
At this point it seems to me that you are mixing a Many Worlds model with a naive determinism, and the problem is with the naive determinism. Imagine saying this: “on the day I turned 17, there is one fixed path towards the future, where I either commit a murder or don’t, and the result is the same whatever I do”. Is this right, or wrong, or confused, or...? Because this is what you are saying, just adding Many Worlds. The difference is that in One World model, if you say “I will flip a coin, and based on the result I will kill him or not” and you mean it, then you are a murderer with probability 50%, while in Many Worlds you are a murderer in 50% of branches. (Of course with the naive determinism the probability is also only in mind—you were already determined to throw the coin with given direction and speed.)
Simply speaking, in Many Worlds model all probabilities happen, but higher probabilities happen “more” and lower probabilities happen “less”. You don’t want to be a murderer? Then behave so that your probability of murdering someone is as small as possible! This is equally valid advice for One World and Many Worlds.
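The “higher probabilities happen more” point can be sketched numerically (an illustration of my own; the 0.01% figure echoes the number used upthread):

```python
import random

def branch_fraction(p_bad: float, n_branches: int, seed: int = 0) -> float:
    """Simulate n_branches equally weighted branches; return the fraction
    in which the bad outcome occurs."""
    rng = random.Random(seed)
    bad = sum(rng.random() < p_bad for _ in range(n_branches))
    return bad / n_branches

# The fraction of simulated "murder branches" tracks the one-world
# probability: with p_bad = 0.0001, roughly 0.01% of branches are bad.
frac = branch_fraction(0.0001, 1_000_000)
```

Lowering p_bad lowers the fraction of bad branches in the same proportion, which is the sense in which “behave so that your probability of murdering is as small as possible” is equally good advice in both models.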
So, fine, live in my world and don’t worry about the others. But whence that rule?
Because you can’t influence what happens in the other branches. However, if you did something that could, with some probability, lead to another person’s death (e.g. shooting at them and missing), you should understand that it was a bad thing which made you (in some other branch) a murderer, so you should not do it again (but neither should you do it again in One World). On the other hand, if you did something that could lead to a good outcome, but you randomly failed, you did (in some other branch) a good thing. (Careful! You have a big bias to overestimate the probability of the good outcome. So don’t reward yourself too much for trying.)
Being unhappy that in 0.01% of Many Worlds you are a murderer, is like being unhappy that with probability 0.01% you are a murderer in One World.
That doesn’t seem plausible. If there’s a 0.01% probability that I’m a murderer (and there is only one world), then if I’m not in fact a murderer, I have committed no murders. If there are many worlds, then I have committed no murders in this world, but the ‘me’ in another world (whose path approximates mine to the extent that I would call that person ‘me’) in fact is a murderer. It seems like a difference between some murders and no murders.
Because this is what you are saying, just adding Many Worlds.
I’m saying that depending on what I do, I end up in a non-murder path or a murder path. But nothing I do can change the number of non-murder or murder paths. So it’s not deterministic as regards my position in this selection, just deterministic as regards the selection itself. I can’t causally interact with other worlds, so my not murdering in one world has no effect on any other worlds. If there are five murder worlds branching off from myself at 17, then there are five no matter what. Maybe I can adjust that number prior to the day I turn 17, but there’s still a fixed number of murder worlds extending from the day I was born, and there’s nothing I can do to change that. Is that a faulty case of determinism?
Because you can’t influence what happens in the other branches.
That’s a good point. Would you be willing to commit to an a priori ethical principle such that ought implies can?
If there are five murder worlds branching off from myself at 17, then there are five no matter what.
That’s equivalent to saying “if at the moment of my 17th birthday there is a probability 5% that I will murder someone, then in that moment there is a probability 5% that I will murder someone no matter what”. I agree with this.
there’s still a fixed number of murder worlds extending from the day I was born, and there’s nothing I can do to change that.
That’s equivalent to saying “if at the day I was born there is an X% chance that I will become a murderer, there is nothing I can do to change that probability on that day”. True; you can’t travel back in time and create a counterfactual universe.
Short summary: You are mixing together two different views—the timeful and the timeless view. In the timeful view you can say “today at 12:00 I decided to kill my neighbor”, and it makes sense. Then you switch to the position of a ceiling cat, an independent observer outside of our universe, outside of our time, and say “I cannot change the fact that today at 12:00 I killed my neighbor”. Yes, it also makes sense; if something happened, it cannot non-happen. But we are confusing two narrators here: the real you, and the ceiling cat. You decided to kill your neighbor. The ceiling cat cannot decide that you didn’t, because the ceiling cat does not live in this universe; it can only observe what you did. The reason you killed your neighbor is that you, existing in this universe, decided to do so. You are the cause. The ceiling cat sees your action as determined, because it is outside of the universe.
If we apply it to the Many Worlds hypothesis, there are 100 different yous, and one ceiling cat. Of those, 5 yous commit murder (because they decided to do so), and 95 don’t (because they decided otherwise, or just failed to murder successfully). Inside the universes, the 5 yous are murderers, the 95 are not. The ceiling cat may decide to blame those 95 for the actions of those 5, but that’s the ceiling cat’s decision. It should at least give you credit for keeping the ratio 5:95 instead of e.g. 50:50.
Would you be willing to commit to an a priori ethical principle such that ought implies can?
That’s tricky. In some sense, we can’t do anything unless the atoms in our bodies do it; and our atoms are following the laws of physics. In some sense, there is no such thing as “can”, if we want to examine things on the atom level. (And that’s equally true in Many Worlds as in One World; only in One World there is also randomness in the equations.) In another sense, humans are decision-makers. But we are decision-makers built from atoms, not decision-makers about the atoms we are built from.
So my answer would be that “ought” implies psychological “can”; not atomic “can”. (Because the whole ethics exists on psychological level, not on atomic level.)
Short summary: You are mixing together two different views—timeful and timeless view.
This sounds right to me, and I think your subsequent analysis is on target. So we have two views, the timeless view and the timeful view and we can’t (at least directly) translate ethical principles like ‘minimize evils’ across the views. So say we grant this and move on from here. Maybe my question is just that the timeless view is one in which ethics seems to make no sense (or at least not the same kind of sense), and the timeful view is a view in which it is a pressing concern. Would you object to that?
the timeless view is one in which ethics seems to make no sense
I didn’t fully realize that previously, but yes—in the timeless view there is no time, no change, no choice. Ethics is all about choices.
Ethical reasoning only makes sense in time, because the process of ethical reasoning is moving the particles in your brain, and the physical consequence of that can be a good or evil action. Ethics can have an influence on the universe only if it is a part of the universe. The whole universe is determined only by its laws and its contents. The only way ethics can act is through the brains of people who contemplate it. Ethics is a human product (though we can discuss how much freedom we had in creating this product; whether it would be different if we had a different history or biology) and it makes sense only on the human level, not on the level of particles.
I just stick with the timeless view and don’t have any trouble with ethics in it, but that’s because I’ve got all the phenomena of time fully embedded in the timeless view, including choice and morality. :)
Ethics is a human product (though we can discuss how much freedom we had in creating this product; whether it would be different if we had a different history or biology) and it makes sense only on the human level, not on the level of particles.
I’m happy with the idea that ethics is a human product (since this doesn’t imply that it’s arbitrary or illusory or anything like that). I take this to mean, basically, that ethics concerns the relation of some subsystems with others. There’s no ethical language which makes sense from the ‘top-down’ or from a global perspective. But there’s also nothing to prevent (this is Eliezer’s meaning, I guess) a non-global perspective from being worked out in which ethical language does make sense. And this perspective isn’t arbitrary, because the subsystems working it out have always occupied that perspective as subsystems. To see an algorithm from the inside is to see the world as a whole by seeing it as potentially involved in this algorithm. And this is what leads to the confusion between the global, timeless view and the (no less global, in some sense) timeful inside-an-algorithm view.
If that’s all passably normal (as skeptical as I am at the coherence of the idea of ‘adding up to normality’) then the question that remains is what I should do with my idea of things mattering ethically. Maybe the answer here is to see ethical agents as ontologically fundamental or something, though that sounds dangerously anthropocentric. But I don’t know how to justify the idea that physically-fundamental = ontologically-fundamental either.
Would you be willing to commit to an a priori ethical principle such that ought implies can?
I’m not Viliam Bur, but I wouldn’t quite agree with this, in that time matters. It’s not incoherent to talk about a system that can’t do X, could have done X, and ought to have done X, for example. It’s similarly not incoherent to talk about a system that can’t do X now but ought to have acted in the past so as to be able to do X now.
But yes, in general I would say the purpose of ethics is to determine right action. If we’re talking about the ethical status of a system with respect to actions we are virtually certain the system could not have taken, can not take, and will not be able to take, then we’re no longer talking about ethics in any straightforward sense.
Okay, so let’s adopt ‘ought implies can’ then, and restrict it to the same tense: if I ought to do X, I can do X. If I could have done (but can no longer do) X, then I ought to have done (but no longer ought to do) X.
How does this, in connection with MW, interact with consequentialism? The consequences of my actions can’t determine how much murdering I do (in the big world sense), just whether or not I fall on a murder-path. In the big world sense, I can’t (and therefore ought not) change the number of murder-paths. The consequence at which I should aim is the nature of the path I inhabit, because that’s what I can change.
Maybe this is right, but if it is, it seems to me to be an oddly subjective form of consequentialism. I’m not sure if this captures my thought, but it seems that it’s not as if I’m making the world a better place, I’m just putting myself in a better world.
it seems that it’s not as if I’m making the world a better place, I’m just putting myself in a better world.
It seems like you are not making the world a better place because you think of a fixed probability of becoming a murderer, which your decisions cannot change. But the probability of you becoming a murderer is a result of your decisions.
You have reversed the causality, because you imagine the probability of you ever being a murderer as something that existed earlier, and your decisions about murdering as something that happens later.
You treat the probability of something happening in the future as a fact that happened in the past. (Which is a common error. When humans talk about “outside of time”, they always imagine it in the past. No, the past is not outside of time; it is a part of time.)
The consequences of my actions can’t determine how much murdering I do (in the big world sense), [...] the nature of the path I inhabit, because that’s what I can change.
I’m not at all convinced that I endorse what you are doing with the word “I” here.
If we want to say that there exists some entity I, such that I commit murders on multiple branches, then to also talk about “the nature of the path I inhabit” seems entirely incoherent. There is no single path I inhabit, I (as defined here) inhabits all paths.
Conversely, if we want to say that there exists a single path that I inhabit (a much more conventional way of speaking), then murders committed on other branches are not murders I commit.
I’m not sure if that affects your point or not, but I have trouble refactoring your point to eliminate that confusion, so it seems relevant.
If we want to say that there exists some entity I, such that I commit murders on multiple branches, then to also talk about “the nature of the path I inhabit” seems entirely incoherent. There is no single path I inhabit, I (as defined here) inhabits all paths.
True, good point. That seems to be salt on the wound though. What I meant by ‘I’ is this: say I’m in path A. I have a parallel ‘I’ in path B if the configuration of something in B is such that, were it in A at some time past or future, I would consider it to be a (perhaps surprising) continuation of my existence in A.
If the Ai and the Bi are the same person, then I’m ethically responsible for the behavior of Bi for the same reasons I’m ethically responsible for myself (Ai). If Ai and Bi are not the same person (even if they’re very similar people) then I’m not responsible for Bi at all, but I’m also no longer de-coherent: there is always only one world with me in it. I take it neither of these options is true, and that some middle ground is to be preferred: Bi is not the same person as me, but something like a counterpart. Am I not responsible for the actions of my counterparts?
That’s a hard question to answer, but say I get uploaded and copied a bunch of times. A year later, some large percentage of my copies have become serial killers, while others have not. Are the peaceful copies morally responsible for the serial killing? If we say ‘no’ then it seems like we’re committed to at least some kind of libertarianism as regards free will. I understood the compatibilist view around here to be that you are responsible for your actions by way of being constituted in such and such a way. But my peaceful copies are constituted in largely the same way as the killer copies are. We only count them as numerically different on the basis of seemingly trivial distinctions like the fact that they’re embodied in different hardware.
What I meant by ‘I’ is this: say I’m in path A. I have a parallel ‘I’ in path B if the configuration of something in B is such that, were it in A at some time past or future, I would consider it to be a (perhaps surprising) continuation of my existence in A.
Well, OK. We are, of course, free to consider any entity we like an extension of our own identity in the sense you describe here. (I might similarly consider some other entity in my own path to be a “parallel me” if I wish. Heck, I might consider you a parallel me.)
If the Ai and the Bi are the same person, then I’m ethically responsible for the behavior of Bi for the same reasons I’m ethically responsible for myself (Ai).
It is not at all clear that I know what the reasons are that I’m ethically responsible for myself, if I am the sort of complex mostly-ignorant-of-its-own-activities entity scattered across multiple branches that you are positing I am. Again, transplanting an ethical intuition (like “I am ethically responsible for my actions”) unexamined from one context to a vastly different one is rarely justified.
So a good place to start might be to ask why I’m ethically responsible for myself, and why it matters.
I take it neither of these options is true, and that some middle ground is to be preferred: Bi is not the same person as me, but something like a counterpart.
Can you say more about that preference? I don’t share it, myself. I would say, rather, that I have some degree of confidence in the claim “Ai and Bi are the same person” and some degree of confidence that “Ai and Bi are different people,” and that multiple observers can have different degrees of confidence in these claims about a given (Ai, Bi) pair, and there’s no fact of the matter.
say I get uploaded and copied a bunch of times. A year later, some large percentage of my copies have become serial killers, while others have not. Are the peaceful copies morally responsible for the serial killing?
Say I belong to a group of distinct individuals, who are born and raised in the usual way, with no copying involved. A year later, some large percentage of the individuals in my group become serial killers, while others do not. Are the peaceful individuals morally responsible for the serial killing?
Almost all of the relevant factors governing my answer to your example seem to apply to mine as well. (My own answer to both questions is “Yes, within limits,” those limits largely being a function of the degree to which observations of Ai can serve as evidence about Bi.)
But it seems to me that whether or not I murder someone, the total number of murder-paths and the total number of non-murder-paths is the same? Is this totally off base? I hope that it is.
Good news! It is totally off base. There is nothing in quantum mechanics requiring that the number of branches corresponding to an arbitrary macroscopic event and its negation must be equal.
There is nothing in quantum mechanics requiring that the number of branches corresponding to an arbitrary macroscopic event and its negation must be equal.
Aww, you had my hopes up. There’s nothing in my set-up that requires them to be equal either, just that the numbers be fixed.
So, fine, live in my world and don’t worry about the others. But whence that rule? That seems arbitrary
That feeling of arbitrariness is, IMHO, worth exploring more carefully.
Suppose, for example, it turns out that we don’t live in a Big World… that this is all there is, and that events either happen in this world or they don’t happen at all. Suppose you somehow were to receive confirmation of this. Big relief, right? Now you really can reduce the total amount of whatever in all of existence everywhere, so actions have meaning again.
But then you meet someone who says “But what about hypothetical people? No matter how many people I don’t actually murder, there’s still countless hypothetical people being hypothetically murdered! And, sure, you can tell me to just worry about actual people and don’t worry about the other, but whence that rule? It seems arbitrary.”
Would you find their position reasonable? What would you say to them, if not?
But then you meet someone who says “But what about hypothetical people? No matter how many people I don’t actually murder, there’s still countless hypothetical people being hypothetically murdered! And, sure, you can tell me to just worry about actual people and don’t worry about the other, but whence that rule? It seems arbitrary.”
Well put. This actually does come up in a philosophical view known as modal realism. Roughly, if we can make true or false claims about possible worlds, then those worlds must be actual in order to be truth-makers. So all possible worlds are actual.
If my someone said what you said he said, suppose I ask this in reply:
E: “Wait, are those hypothetical people being hypothetically murdered? Is that true?”
S: “Yes! And there’s nothing you can do!”
E: “And there’s some reality to which this part of the map, the hypothetical-people-being-murdered, corresponds? Such that the hypothetical murder of these people is a real part of our world?”
S: “Well, sure.”
E: “Okay, well if we’re going to venture into modal realism then this just conflicts in the same way.”
S: Suppose we’re not modal realists then. Suppose there’s just not really a fact of the matter about whether or not hypothetical, and therefore non-existent, people are being murdered.
E: No problem. I’m just interested in reducing real evils.
S: Isn’t that an arbitrary determination?
E: No, it’s the exact opposite of arbitrary. I also don’t take non-existent evidence as evidence, I don’t eat non-existent fruit, etc. If we call this arbitrary, then what isn’t?
I would certainly say you’re justified in not caring about hypothetical murders. I would also say you’re justified in not caring about murders in other MW branches.
What you seem to want to say here is that because murders in other MW branches are “actual”, you care about them, but since murders in my imagination are not “actual”, you don’t.
I have no idea what the word “actual” could possibly refer to so as to do the work you want it to do here.
There are certainly clusters of consistent experience to which a hypothetical murder of a hypothetical person corresponds. Those clusters might, for example, take the form of certain patterns of neural activation in my brain… that’s how I usually model it, anyway. I’m happy to say that those are “actual” patterns of neural activation. I would not say that they are “actual” murdered human beings.
That said, I’m not really sure it matters if they are. I mean, if they are, then… hold on, let me visualize… there: I just “actually” resurrected them and they are now “actually” extremely happy. Was their former murder still evil? At best, it seems all of my preconceived notions about murder (e.g., that it’s a permanent state change of some kind) have just been thrown out the window, and I should give some serious thought to why I think murder is evil in the first place.
It seems something similar is true about existence in a Big World… if I want to incorporate that into my thinking, it seems I ought to rethink all of my assumptions. Transplanting a moral intuition about murder derived in a small world into a big world without any alteration seems like a recipe for walking off conceptual cliffs.
What you seem to want to say here is that because murders in other MW branches are “actual”, you care about them, but since murders in my imagination are not “actual”, you don’t.
Right, exactly. I’m taking this sense of ‘actual’ (not literally) from the sequences. This is from ‘On being Decoherent’:
You only see nearby objects, not objects light-years away, because photons from those objects can’t reach you, therefore you can’t see them. By a similar locality principle, you don’t interact with distant configurations.
Later on in this post EY says that the Big World is already at issue in spatial terms: somewhere far away, there is another Esar (or someone enough like me to count as me). The implication is that existing in another world is analogous to existing in another place. And I certainly don’t think I’m allowed to apply the ‘keep your own corner clean’ principle to spatial zones.
In “Living in Many Worlds”, EY says:
“Oh, there are a few implications of many-worlds for ethics. Average utilitarianism suddenly looks a lot more attractive—you don’t need to worry about creating as many people as possible, because there are already plenty of people exploring person-space. You just want the average quality of life to be as high as possible, in the future worlds that are your responsibility.
And you should always take joy in discovery, as long as you personally don’t know a thing. It is meaningless to talk of being the “first” or the “only” person to know a thing, when everything knowable is known within worlds that are in neither your past nor your future, and are neither before or after you.”
I take him to mean that there are really, actually many other people who exist (just in different worlds) and that I’m responsible for the quality of life for some sub-set of those people. And that there really are, actually, many people in other worlds who have discovered or know things I might take myself to have discovered or be the first to know. Such that it’s a small but real overturning of normality that I can’t really be the first to know something. (That, I assume, is what an ethical implication of MW amounts to: some overturning of some ethical normality.)
I’m happy to say that those are “actual” patterns of neural activation. I would not say that they are “actual” murdered human beings.
If you modeled it to the point that you fully modeled a human being in your brain, and then murdered them, it seems obvious that you did actually kill someone. Hypothetical murders (but considered) fail to be murders because they fail to be good enough models.
Ordinarily, I would describe someone who is uncertain about obvious things as a fool. It’s not clear to me that I’m a fool, but it is also not at all clear to me that murder as you’ve defined it in this conversation is evil.
If you could explain that obvious truth to me, I might learn something.
Ordinarily, I would describe someone who is uncertain about obvious things as a fool. It’s not clear to me that I’m a fool, but it is also not at all clear to me that murder as you’ve defined it in this conversation is evil.
I didn’t mean to call you a fool, only I don’t think the disruption of your intuitions is a disruption of your ethical intuitions. It’s unintuitive to think of a human being as something fully emulated within another human being’s brain, but if this is actually possible, it’s not unintuitive that ending this neural activity would be murder (if it weren’t some other form of killing a human being). My point was just that the distinction in hardware can’t make a difference to the question of whether or not ending a neural activity is killing, and given a set of constants, murder.
Since I don’t think we’re any longer talking about my original question, I think I’ll tap out.
How do you know it all adds up to normality? What should I anticipate if it does, and what should I anticipate if it doesn’t? Or is this an a priori principle?
When Einstein overthrew the Newtonian version of gravity, apples didn’t stop falling, planets didn’t swerve into the Sun. Every new theory of physics must capture the successful predictions of the old theory it displaced; it should predict that the sky will be blue, rather than green.
So don’t think that many-worlds is there to make strange, radical, exciting predictions. It all adds up to normality.
Which means that your ethics should not depend on the potential existence of other worlds we have no way of interacting with. In other words, while it might well be simpler (for some people) to reason about your ethics using the many worlds paradigm, the outcome of this reasoning should not depend on the number of worlds.
So, I’ve been thinking about this, and say I and everyone I know believes that it’s possible to be the first one, absolutely, to whistle a tune. This is, for our strange culture, an important ethical belief. That belief is part of what I would call ‘normality’. Now, some jerk comes along and proves MW, and so I learn that for any tune I would consider novel, odds are that it’s been whistled before in another world (I’m taking this example from EY in the sequences). So, depending on my normal, MW may add up to normality, and it may not. In a much more obvious sense, if my normal is Newtonian physics, MW doesn’t add up to normality either.
So what does adding up to normal mean? Consider that my other stupid question. Egan’s law seems to go un-argued for and unexplained. If it just means what the paragraph you cite says, then MW may well abolish or come into conflict with our ethical ideas, since apparently it comes into conflict with all kinds of other ideas (like false physical theories) and none of this requires the destruction of the solar system or flying apples.
It means that if you do not observe pink unicorns daily, no new weird and wonderful theory should claim that you should have. Or, as EY puts it “apples didn’t stop falling, planets didn’t swerve into the Sun”. Another name for this is the correspondence principle.
If your ethics requires for you to be the first tune whistler in the multiverse, not just in this world, it’s not a useful ethics.
If your ethics requires for you to be the first tune whistler in the multiverse, not just in this world, it’s not a useful ethics.
The usefulness of the ethics (if that’s the right standard to apply to an ethical idea) is not relevant to the example.
That is, unless you want to posit (and we should be super, super clear about this) that there is an a priori principle that any ethics capable of being contradicted by a true physical theory is not useful. But I very much doubt you want to say that.
I think modern physics pretty obviously doesn’t add up to normality in a number of cases. Long debates about cryonics took place because part of many people’s normal understanding of personal identity (an ethical category if there ever was one) involved a conception of material constituents like atoms such that there can be my atoms versus your atoms. This just turned out to be nonsense, as we discovered through investigation of physics. The fact that atoms no more have identities qua particular instances than do numbers overturned some element of normality.
Given cases like that, how does one actually argue for Egan’s law? It’s not enough to just state it.
It means that if in your branch you are the first one to whistle the tune, there is no one else in your branch to contradict you. (Just as you would expect in One World.) In some other branch someone else was first, and in that branch you don’t think that you were the first, so again no conflict.
if my normal is Newtonian physics
Then “adding up to normal” means that even when Einstein ruins your model, all things will behave the same way as they always did. Things that obeyed Newtonian physics within a given precision will continue to do so. You will only see exceptions in unusual situations, such as GPS satellites. (But if you had GPS satellites before Einstein invented his theory, you would have seen those exceptions too. You just didn’t know that would happen.)
In case of morality it means that if you had a rule “X is good” because it usually has good consequences (or because it follows the rules, or whatever), then “X is good” even with Many Worlds. The exception is if you try to apply moral significance to a photon moving through a double slit.
An explanation may change: for example it was immoral to say “if the coin ends this side up, I will kill you”, and it is still immoral to do so, but the previous explanation was that “it is bad to kill people with 50% probability” and the new explanation is “it is bad to kill people in 50% of branches” (which means killing them with 50% probability in a random branch).
Okay, so on reflection, I think the idea that it all adds up to normality is just junk. It doesn’t mean anything. I’ll try to explain:
A: MW comes into conflict with this ethical principle.
B: It can’t come into conflict. Physics always adds up to normality.
A: Really? Suppose I see an apple falling, and you discover that there’s no such thing as an apple, but that what we called apples are actually a sub-species of blueberries. Now I’ve learned that I’ve in fact never seen an apple fall, since by ‘apple’ I meant the fruit of an independent species of plant. So, normality overturned.
B: No, that’s not an overturning of normality, that’s just a change of explanation. What you saw was this greenish round thing falling, and you explained this as an ‘apple’. Now your explanation is different, but the thing you observed is the same.
A: Ah, but lets say science discovers that the green round thing I saw isn’t green at all. In fact, green is just the color that bounces off the thing. If it’s any color, it’s the color of the wavelengths of light it absorbs. Normality overturned.
B: But that’s just what being ‘green’ now means. What you saw was some light hitting your receptors in a way that varied over time, and you explained this as a green thing moving. The observation, the light hitting your eye over time, is the same. The explanation has shifted.
A: Now say that it turns out that (bear with me) there is no motion or time. What I thought was some light hitting my retina over time is just my own brain co-evolving with a broader wave-function. Now that’s overturning normality.
B: No, what you experienced qualitatively is the same, but the explanation has changed.
A: What did I experience qualitatively?
B: If you’re willing to go into plausible but hypothetical discoveries, I can’t give it any description that is basic enough that it can’t be ‘overturned’. Even ‘experience’ is probably overturnable.
A: That’s why ‘it all adds up to normality’ is junk. By that standard, nothing is normal. If anything I can describe as a phenomenon is normal, then it can be overturned under that description.
Your decisions aren’t random! If you decide to do something, then the vast majority of your other selves will decide the same thing. When you do good, you do indeed do good in all universes branching from this one.
(But what if what I just said wasn’t the case? Would you let your sense of ethics override the physical evidence? Look at the causal history of your morality: it comes from evolution. Do you think that if MW was true then evolution would be forced to happen differently, in order to give you different morals?)
But what if what I just said wasn’t the case? Would you let your sense of ethics override the physical evidence?
This is a good question, but I think it’s important to understand that it’s a good question. Evidence from the physical sciences doesn’t have some fixed priority over other kinds of evidence. One could argue that it’s an unusually good source of evidence, of course, but I’m not sure how to make the comparison in this case.
So my stupid question is this: why aren’t MWI and ethics just flatly in conflict?
This question used to worry me a lot too, and at one point I also considered the idea that we can’t “change the fundamental amount of goodness” but just choose a path through the branching worlds.
The view that’s currently prevalent among LWers who study decision theory is that you should think of yourself as being able to change mathematical facts, because decisions are themselves mathematical facts and by making decisions you determine other mathematical facts via logical implication. So for example the amount of goodness in a deterministic universe like MWI, given some initial conditions, is a mathematical fact that you can change through your decisions.
Hmm, I don’t think I understand that at all: how can one change a mathematical fact? Aren’t mathematical facts fixed? Is there something you could point me to, which explains this?
Try Towards a New Decision Theory and Controlling Constant Programs. Also, I used the word “change” in my comment since you were asking the question in terms of “change”, but perhaps a better term is “control”, which is what Nesov uses.
The issue is that MWI does not address the phenomenon of a single path being empirically special (your path). The theories (the code you would arrive at by running Solomonoff induction on your sensory input) have to address this phenomenon: they predict (or guess) sensory input, rather than produce something which merely contains the sensory input somewhere in the middle of an enormous stream of alternatives. [Putting aside for the moment that Solomonoff induction with a Turing machine would have trouble with rotational and other symmetries.]
That is true of physics in general: it is by design concerned with predicting our sensory input, NOT ‘explaining it away’ by producing an enormous body of things within which the input can be found. This is why MWI, as it now stands, is seen as unsatisfactory, and why even the un-physical collapse of CI is acceptable. The goal is to guess the sensory input as well as possible, and thus the choice of path, even if made randomly, has to be part of the theory.
Furthermore, if one seeks the shortest ‘explanatory’ theory which contains you and your input somewhere within it, but which doesn’t have to include the ‘guess where you are’ part, MWI is not the winner; a program that iterates over all theories of physics and simulates them is, and you get another sort of multiverse.
edit: On a more general note, one shouldn’t be convinced simply because one can’t see a simpler alternative. It’s very hard to see alternatives in physics. Here is a good article about the issue.
So, in the spirit of stupid (but nagging) questions:
The sequences present a convincing case (to me at least) that MWI is the right view of things, and that it is the best conclusion of our understanding of physics. Yet I don’t believe it, because it seems to be in direct conflict with the fact of ethics: if all I can do is push the badness out of my path, and into some other path, then I can’t see how doing good things matters. I can’t change the fundamental amount of goodness, I can just push it around. Yet it matters that I’m good and not bad.
The ‘keep your own path clean’ answer is very unsatisfying, just because it doesn’t work anywhere else. I can’t just keep my own family, neighborhood, city, country, or planet clean. I can’t even just decide to keep my own temporal chunk of the universe clean, while ignoring the rest and even at the expense of the rest of it. Why should this principle suddenly work in the case of other worlds? It seems ad hoc.
So my stupid question is this: why aren’t MWI and ethics just flatly in conflict?
Actual answers aside, as a rationalist you should panic at this phrase.
What do you mean by in conflict? Believing one says nothing about the other. You’re not “pushing” anything around. If you act good in one set of universes, that is a set of universes made better by your actions. If you act bad in another, the same thing. Acting good does not cause other universes to become bad.
People making decisions are not quantum events. When a photon could either end up in a detector or not, there are branches where it does and branches where it doesn’t. But when you decide whether or not to do something good, this decision is being carried out by neurons, which are big enough that quantum events do not influence them much. This means that if you decide to do something good, you probably also decided to do the same good thing in the overwhelming majority of Everett branches that diverge from when you started considering the decision.
This may be true, but I don’t think anyone knows for sure, and it seems likely to me that the brain has the property of sensitivity to initial conditions, meaning that it’s likely to do different stuff in different Everett branches.
Yvain recently asked about this on his blog—he tends to agree with you:
More on-topic for the grandparent: Greg Egan’s novella Oracle talks about the ethical issue of bad stuff happening in other Everett branches.
The fact that I can reliably multiply numbers shows that at least some of my decisions are deterministic.
To the extent that I make ethical decisions based on some partially deterministic reasoning process, my ethical decisions are not chaotic.
If, due to chaos, I have a probability p of slapping my friends instead of hugging them, then Laplace’s law of succession tells me that p is less than 1%.
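(For the curious: the Laplace estimate above is easy to check with a couple of lines. This is just a toy illustration; the 100-interaction count is a made-up figure, assuming zero observed slaps.)

```python
# Laplace's rule of succession: after observing k events in n trials,
# estimate the probability of the event on the next trial as (k + 1) / (n + 2).
def rule_of_succession(successes, trials):
    return (successes + 1) / (trials + 2)

# Zero slaps in 100 past interactions with friends:
p_slap = rule_of_succession(0, 100)
print(p_slap)  # 1/102, just under 1%
```

So the “less than 1%” bound holds once you have on the order of a hundred slap-free interactions behind you.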
There must be chaotic amplification of quantum events going on. Any macroscopic system at finite temperature will be full of quantum events, like a molecule in an excited state returning to its ground state. The quantum randomness is a constant source of “noise” which normally averages out, but sometimes there will be fluctuations away from a mean, and sometimes they will be amplified into mesoscopic and macroscopic differences. This must be true, but it would be best to have a mathematical demonstration, e.g. that the impact of quantum fluctuations on the transfer of heat through an atmosphere will amplify into macroscopically different weather patterns on a certain timescale.
I have taken lots of decisions based on random bits from Fourmilab or random.org (especially before finding LessWrong—nowadays I only do that when deciding which password to use and stuff like that).
Just a caution, here. The sequences only really talk about non-relativistic quantum mechanics (NRQM), and I agree that MWI is the best interpretation of this theory. However, NRQM is false, so it doesn’t follow that MWI is the “right view of things” in the general sense. Quantum field theory (QFT) is closer to the truth, but there are a number of barriers to a straightforward importation of MWI into the language of QFT. I’m reasonably confident that an MWI-like interpretation of QFT can be constructed, but it does not exist in any rigorous form as of yet (as far as I am aware, at least). You should be aware of this before committing yourself to the claim that MWI is an accurate description of the world, rather than just the best way of conceptualizing the world as described by NRQM.
This is important if true, and I would like to know more. What are the barriers?
On the other hand, my understanding is that QFT itself doesn’t exist in a rigorous form yet, either.
This article (PDF) gives a nice (and fairly accessible) summary of some of the issues involved in extending MWI to QFT. See sections 4 and 8 in particular. Their focus in the paper is wavefunction realism, but given that MWI (at least the version advocated in the Sequences) is committed to wavefunction realism, their arguments apply. They offer a suggestion of the kind of theory that they think can replace MWI in the relativistic context, but the view is insufficiently developed (at least in that paper) for me to fully evaluate it.
A quick summary of the issues raised in the paper:
In NRQM, the wave function lives in configuration space, but there is no well-defined particle configuration space in QFT since particle number is not conserved and particles are emergent entities without precisely defined physical properties.
A move to field configuration space is unsatisfactory because quantum field theories admit of equivalent description using many different choices of field observable. Unlike NRQM, where there are solid dynamical reasons for choosing the position basis as fundamental, there seems to be no natural or dynamically preferred choice in QFT, so a choice of a particular field configuration space description would amount to ad hoc privileging.
MWI in NRQM treats physical space as non-fundamental. This is hard to justify in QFT, because physical space-time is bound up with the fundamentals of the theory to a much greater degree. The dynamical variables in QFT are operators that are explicitly associated with space-time regions.
This objection is particularly clever and interesting, I think. In MWI, the history of the universe is fully specified by giving the universal wavefunction at each time in some reference frame. In a relativistic context, one would expect that all one needs to do in order to describe how the universe looks in some other inertial reference frame is to perform a Lorentz transformation on this history. If the history really tells us everything about the physical state of the universe, then it gives us all the information required to determine how the universe looks under a Lorentz transformation. But in relativistic quantum mechanics, this is not true. Fully specifying the wavefunction (defined on an arbitrarily chosen field configuration space, say) at all times is not sufficient to determine what the universe will look like under a Lorentz transformation. See the example on p. 21 in the paper, or read David Albert’s paper on narratability. This suggests that giving the wavefunction at all times is not a full specification of the physical properties of the universe.
I assume you’re referring to the infinities that arise in QFT when we integrate over arbitrarily short length scales. I don’t think this shows a lack of rigor in QFT. Thanks to the development of renormalization group theory in the 70s, we know how to do functional integrals in QFT with an imposed cutoff at some finite short length scale. QFT with a cutoff doesn’t suffer from problems involving infinities. Of course, the necessity of the cutoff is an indication that QFT is not a completely accurate description of the universe. But we already know that we’re going to need a theory of quantum gravity at the Planck scale. In the domain where it works, QFT is reasonably rigorously defined, I’d say.
Thanks for that; it’s quite an interesting article, and I’m still trying to absorb it. However, one thing that seems pretty clear to me is that for EY’s intended philosophical purposes, there really is no important distinction between “wavefunction realism” (in the context of NRQM) and “spacetime state realism” (in the context of QFT). Especially since I consider this post to be mostly wrong: locality in configuration space is what matters, and configuration space is a vector space (specifically a Hilbert space) -- there is no preferred (orthonormal) basis.
If the “problem” is merely that certain integrals are divergent, then I agree. No one says that the divergence of ∫₀¹ (1/x) dx shows a lack of rigor in real analysis!
What concerns me is whether any actual mathematical lies are being told—such as integrals being assumed to converge when they haven’t yet been proved to do so. Or something like the early history of the Dirac delta, when physicists unashamedly spoke of a “function” with properties that a function cannot, in fact, have.
If QFT is merely a physical lie—i.e., “not a completely accurate description of the universe”—and not a mathematical one, then that’s a different matter, and I wouldn’t call it an issue of “rigor”.
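(As an aside: the kind of divergence at issue, and the way a cutoff tames it, is easy to see numerically. This is a toy illustration of the cutoff idea only, not of QFT itself; the function name and grid size are arbitrary.)

```python
import math

# With a short-distance cutoff eps, the otherwise divergent integral of 1/x
# over [eps, 1] is perfectly finite (it equals ln(1/eps)), but it grows
# without bound as the cutoff is removed (eps -> 0).
def cutoff_integral(eps, n=10_000):
    """Riemann sum of 1/x on [eps, 1] over a geometric grid."""
    r = (1 / eps) ** (1 / n)  # common ratio between grid points
    total, x = 0.0, eps
    for _ in range(n):
        a, b = x, x * r
        total += (b - a) / ((a + b) / 2)  # cell width / value of 1/x at midpoint
        x = b
    return total

for eps in (1e-2, 1e-4, 1e-6):
    print(f"eps = {eps:.0e}: integral ~ {cutoff_integral(eps):.4f}, ln(1/eps) = {math.log(1 / eps):.4f}")
```

Every finite cutoff gives a finite, well-defined answer; the infinity only appears in the eps → 0 limit, which is roughly the situation with cutoff QFT described above.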
I’m a little unclear about what EY’s intended philosophical purposes are in this context, so this might well be true. One possible problem worth pointing out is that spacetime state realism involves an abandonment of a particular form of reductionism. Whether or not EY is committed to this form of reductionism somebody more familiar with the sequences than I would have to judge.
According to spacetime state realism, the physical state of a spacetime region is not supervenient on the physical states of its subregions, i.e. the physical state of a spacetime region could be different without any of its subregions being in different states. This is because subregions can be entangled with one another in different ways without altering their local states. This is not true of wavefunction realism set in configuration space. There, the only way a region of configuration space could have different physical properties is if some of its subregions had different properties.
Also, I think it’s possible that the fact that the different “worlds” in spacetime state realism are spatially overlapping (as opposed to wavefunction realism, where they are separated in configuration space) might lead to interesting conceptual differences between the two interpretations. I haven’t thought about this enough to give specific reasons for this suspicion, though.
I’m not sure exactly what you’re saying here, but if you’re rejecting the claim that MWI privileges a particular basis, I think you’re wrong. Of course, you could treat configuration space itself as if it had no preferred basis, but this would still amount to privileging position over momentum. You can’t go from position space to momentum space by a change of coordinates in configuration space. Configuration space is always a space of possible particle position configurations, no matter how you transform the coordinates.
I think you might be conflating configuration space with the Hilbert space of wavefunctions on configuration space. In this latter space, you can transform from a basis of position eigenstates to a basis of momentum eigenstates with a coordinate transformation. But this is not configuration space itself, it is the space of square integrable functions on configuration space. [I’m lying a little for simplicity: Position and momentum eigenstates aren’t actually square integrable functions on configuration space, but there are various mathematical tricks to get around this complication.]
If this is your standard for lack of rigor, then perhaps QFT hasn’t been rigorously formulated yet, but the same would hold of pretty much any physical theory. I think you can find places in pretty much every theory where some such “mathematical lie” is relied upon. There’s an example of a standard mathematical lie told in NRQM earlier in my post.
In many of these cases, mathematicians have formulated more rigorous versions of the relevant proofs, but I think most physicists tend to be blithely ignorant of these mathematical results. Maybe QFT isn’t rigorously formulated according to the mathematician’s standards of rigor, but it meets the physicist’s lower standards of rigor. There’s a reason most physicists working on QFT are uninterested in things like Algebraic Quantum Field Theory.
As I read him, he mainly wants to make the point that “simplicity” is not the same as “intuitiveness”, and the former trumps the latter. It may seem more “humanly natural” for there to be some magical process causing wavefunction collapse than for there to be a proliferation of “worlds”, but because the latter doesn’t require any additions to the equations, it is strictly simpler and thus favored by Occam’s Razor.
Yes, sorry. What I actually meant by “configuration space” was “the Hilbert space that wavefunctions are elements of”. That space, whatever you call it (“state space”?), is the one that matters in the context of “wavefunction realism”.
(This explains an otherwise puzzling passage in the article you linked, which contrasts the “configuration space” and “Hilbert space” formalisms; but on the other hand, it reduces my credence that EY knows what he’s talking about in the QM sequence, since he doesn’t seem to talk about the space-that-wavefunctions-are-elements-of much at all.)
This is contrary to my understanding. I was under the impression that classical mechanics, general relativity, and NRQM had all by now been given rigorous mathematical formulations (in terms of symplectic geometry, Lorentzian geometry, and the theory of operators on Hilbert space respectively).
The mathematician’s standards are what interests me, and are what I mean by “rigor”. I don’t consider it a virtue on the part of physicists that they are unaware of or uninterested in the mathematical foundations of physics, even if they are able to get away with being so uninterested. There is a reason mathematicians have the standards of rigor they do. (And it should of course be said that some physicists are interested in rigorous mathematics.)
This is a very good post, but I wonder: one of the authors of the paper you cite is David Wallace, perhaps the most prominent proponent of the modern Everettian interpretation. He just published a new book called “The Emergent Multiverse”, and he claims there is no problem unifying MWI with QFT, because interactions within worlds are local and only states are nonlocal. I have yet to hear him mention any need for serious reformulation of anything in terms of MWI.
You said you suspect this is necessary, and that you hope we can recover a similar MWI, but isn’t it more reasonable to expect that at the Planck scale something else will explain the quantum weirdness? After all, if MWI fails both probability and relativity, then there is no good reason to suspect that this interpretation is correct.
Have you given any thought to Gerard ’t Hooft’s idea of cellular automata, which he claims salvages determinism, locality, and realism?
When I talk about recovering MWI, I really just mean absorbing the lesson that our theory does not need to deliver determinate measurement results, and ad hoc tools for satisfying this constraint (such as collapse or hidden variables) are otiose. Of course, the foundations of our eventual theory of quantum gravity might be different enough from those of quantum theory that the interpretational options don’t translate. How different the foundations will be depends on which program ends up working out, I suspect. If something like canonical quantum gravity or loop quantum gravity turns out to be the way to go, then I think a lot of the conceptual work done in interpreting NRQM and QFT will carry over. If string theory turns out to be on the right track, then maybe a more radical interpretational revision will be required. The foundations of string theory are now thought to lie in M-theory, and the nature of this theory is still pretty conceptually opaque. It’s worth noting though that Bousso and Susskind have actually suggested that string theory provides a solid foundation for MWI, and that the worlds in the string theory landscape are the same thing as the worlds in MWI. See here for more on this. The paper has been on my “to read” list for a while, but I haven’t gotten around to it yet. I’m skeptical but interested.
I know of ‘t Hooft’s cellular automata stuff, but I don’t know much about it. Speaking from a position of admitted ignorance, I’m skeptical. I suspect the only way to construct a genuinely deterministic local realist theory that reproduces quantum statistics is to embrace superdeterminism in some form, i.e. to place constraints on the boundary conditions of the universe that make the statistics work out by hand. This move doesn’t seem like good physics practice to me. Do you know if ’t Hooft’s strategy relies on some similar move?
’t Hooft’s latest paper is the first in which he maps a full QFT to a CA, and the QFT in question is a free field theory. So I think that in this case he evades Bell’s theorem, quantum complexity theorems, etc, by working in a theory where physical detectors, quantum computers, etc don’t exist, because interactions don’t exist. It’s like how you can evade the incompleteness theorems if your arithmetic only has addition but not multiplication. Elsewhere he does appeal to superselection / cosmological initial conditions as a way to avoid cat states (macroscopic superpositions), but I don’t see that playing a role here.
The mapping itself has something to do with focusing on the fractional part of particle momentum as finite, and avoiding divergences by focusing on a particular subspace. It’s not a trivial result. But extending it to interacting field theory will require new ideas, e.g. making the state space of each individual cell in the CA into a Fock space, or permitting CTCs in the CA grid. Surely you need radical ingredients like that in order to recover the full quantum state space…
Aha, I see. So you do not share EY’s view that MWI is “correct” and that the only problem it faces is recovering the Born Rule? I agree that obviously what will end up working will depend on what the foundations are :) I remember that paper by Bousso and Susskind; I even remember sending a mail to Susskind about it, while at the same time asking him about his opinion of ’t Hooft’s work. If I remember correctly, the paper was discussed at some length over at physicsforums.com (can’t remember the post), and the consensus seemed to be that the authors had misinterpreted decoherence in some way. I don’t remember the details, but the fact that the paper has not been mentioned or cited in any article I have read since then indicates to me that there must have been some serious error in it. Also, Susskind’s answer regarding ’t Hooft’s work was illuminating. To paraphrase, he said he felt that ’t Hooft might be correct, but with no predictions it was hard to hold a strong opinion either way. So it seems Susskind was not very sold on his own idea.
Gerard ’t Hooft actually does rely on what people call “superdeterminism”, which I just call “full determinism”, a term I think ’t Hooft also prefers. At least that is what his papers indicate. He discusses this in an article from 2008 responding to Simon Kochen and John Conway’s Free Will Theorem. You might want to read the article: http://www.sciencenews.org/view/generic/id/35391/title/Math_Trek__Do_subatomic_particles_have_free_will%3F After that you might want to head over to arXiv; ’t Hooft has published 3 papers on this issue in the last 6 months, and he seems more and more certain of it. He also addresses the objections in some notes in those papers. Link: http://arxiv.org/find/quant-ph/1/au:+Hooft_G/0/1/0/all/0/1
Depends on what you mean by rigorous. (OTOH, it’s not fully compatible with general relativity, so we know it doesn’t exactly describe the world—or that GR doesn’t, or that neither does.)
If you bug physicists enough, they will admit that the standard model has some problems, like the Landau pole. However, there are toy QFTs in 2 spacial dimension that have models rigorous enough for mathematicians. That should be adequate for philosophical purposes.
I don’t think the Landau pole can be characterized as an actual problem. It was considered a problem for strong interactions, but we now know that quantum chromodynamics is asymptotically free, so it does not have a Landau pole. The Landau pole for quantum electrodynamics is at an energy scale much much higher than the Planck energy. We already know that we need new physics at the Planck scale, so the lack of asymptotic freedom in the Standard Model is not a real practical (or even conceptual) problem.
The Landau pole for QED goes away when coupled with QCD, but I believe another one appears with the Higgs field.
If you don’t like the question I’m answering, complain to Komponisto, not me.
But what would you count as a conceptual problem?
I wasn’t complaining to anyone. And I don’t dislike the question. I was just adding some relevant information. Anyway, I did reply directly to komponisto as well. See the end of my long comment above.
If we did not have independent evidence that QFT breaks down at the Planck scale (since gravity is not renormalizable), I might have considered the Landau pole a conceptual problem for QFT. But since it is only a problem in a domain where we already know QFT doesn’t work, I don’t see it that way.
I don’t think that’s the normal use of “conceptual problem.”
If physicists believe, as their verbiage seems to indicate, that QED is a real theory that is an approximation to reality, and they compute approximations to the numbers in QED, while QED is actually inconsistent, I would say that is an error and a paradigmatic example of a conceptual error.
What does it mean to interpret an inconsistent theory?
There is the standard MWI advocacy that matches Eliezer’s views. This is a critique of this advocacy, point by point. See especially Q14, re QFT. This gives a reason why MWI is not a useful object of study.
The first critique seems to criticize something different from what Eliezer says. It seems like the person quoted by the author did not express themselves clearly, and the critique targets that wrong explanation. For example this part:
For me, Eliezer’s explanation of “blobs of amplitude” makes sense. There is a set of possible configurations, which at the beginning are all very similar, but because some interactions make the differences grow, the set gradually separates into smaller subsets. When exactly? Well, in theory the parts are connected forever, but the connection has only epsilon size relative to the subsets, so it can be ignored. But asking when exactly is like asking “what exactly is the largest number that can be considered ‘almost zero’?”. If you want to be exact, only zero is exactly zero. On the other hand, 1/3^^^3 is for all practical purposes zero. I would feel uncomfortable picking one number and saying “ok, this X is ‘almost zero’, but 1.000001 X is not ‘almost zero’”.
The quoted person seems to say something similar, just less clearly, which allows the critic to use the word “subjective” and jump to the wrong conclusion that the author is saying that mathematics is observer-dependent. (Analogously, just because you and I can have different interpretations of ‘almost zero’, that does not mean mathematics is subjective and observer-dependent. It just means that ‘almost zero’ is not exactly defined, but in real life we care whether e.g. the water we drink contains ‘almost zero’ poison.)
So generally for me it means that once someone famous says a wrong (or just ambiguous) explanation of MWI, that explanation will be forever used as an argument against anything similar to MWI.
Well, not quite. Someone ought to be thinking about this sort of stuff, and the claim that link makes is that MWI isn’t worth considering because it goes against the “scientific ethos.”
The reason I would tell people why MWI is not a useful object of study (for them) is because until you make it a disagreement about the territory, disagreeing about maps cashes out as squabbling. How you interpret QM should not matter, so don’t waste time on it.
Tell that to EY.
MWI doesn’t say anything like that. Nothing in physics says anything about “badness” or “goodness”.
Well, except insofar as humans run on physics, and as such can be described by physics.
Wrong (even when assuming there is an exact definition of goodness).
You can’t fix all branches of the universe, because (1) in most branches you don’t exist, and (2) in a very few branches totally random events may prevent your actions. But this does not mean that your actions don’t increase the amount of goodness.
First, you are responsible only for the branches where you existed, so let’s just remove the other branches from our moral equation. Second, the exceptionally random events happen only in an exceptionally small proportion of branches. So even if some kind of Maxwell’s demon can ruin your actions in 0.000 … … … 001 of branches, there are still 0.999 … … … 999 of branches where your actions worked normally. And improving such a majority of branches is a good thing.
More info here:
Well, let’s say we posit some starting condition, say the condition of the universe on the day I turned 17. I am down one path from that initial condition, and a great many other worlds exist in which things went a little differently. I take it that it’s not (unfortunately) a physical or logical impossibility that in one or more of those branches, I have ten years down the line committed a murder.
Now, there are a finite number of murder-paths, and a finite number of non-murder-paths, and my path is identical to one of them. But it seems to me that whether or not I murder someone, the total number of murder-paths and the total number of non-murder-paths is the same? Is this totally off base? I hope that it is.
Anyway, if that’s true, then by not murdering, all I’ve done is put myself off of a murder-path. There’s one less murder in my world, but not one less murder absolutely. So, fine, live in my world and don’t worry about the others. But whence that rule? That seems arbitrary, and I’m not allowed to apply it in order to localize my ethical considerations in any other case.
On a macro level, a Many Worlds model should be mathematically equal to One World + Probabilities model. Being unhappy that in 0.01% of Many Worlds you are a murderer, is like being unhappy that with probability 0.01% you are a murderer in One World. The difference is that in One World you can later say “I was lucky” or “I was unlucky”, while in the Many Worlds model you can just say “this is a lucky branch” or “this is an unlucky branch”.
At this point it seems to me that you are mixing a Many Worlds model with a naive determinism, and the problem is with the naive determinism. Imagine saying this: “on the day I turned 17, there is one fixed path towards the future, where I either commit a murder or don’t, and the result is the same whatever I do”. Is this right, or wrong, or confused, or...? Because this is what you are saying, just adding Many Worlds. The difference is that in One World model, if you say “I will flip a coin, and based on the result I will kill him or not” and you mean it, then you are a murderer with probability 50%, while in Many Worlds you are a murderer in 50% of branches. (Of course with the naive determinism the probability is also only in mind—you were already determined to throw the coin with given direction and speed.)
Simply speaking, in Many Worlds model all probabilities happen, but higher probabilities happen “more” and lower probabilities happen “less”. You don’t want to be a murderer? Then behave so that your probability of murdering someone is as small as possible! This is equally valid advice for One World and Many Worlds.
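To put the “higher probabilities happen more” point in toy numerical terms (the branch weights here are made up for illustration, borrowing the 0.01% figure from above):

```python
# Toy model: a decision point splits into weighted branches.
# Branch weights play the role of quantum measure; they sum to 1.
branches = [
    {"outcome": "murder", "weight": 0.0001},
    {"outcome": "no murder", "weight": 0.9999},
]

# Many-Worlds bookkeeping: the total measure of murder-branches...
murder_measure = sum(b["weight"] for b in branches if b["outcome"] == "murder")

# ...equals the One-World probability of being a murderer.
one_world_probability = 0.0001
assert murder_measure == one_world_probability

print(murder_measure)  # 0.0001
```

The bookkeeping is identical either way, which is the sense in which the advice “behave so that the weight of bad branches is as small as possible” is the same advice in both models.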
Because you can’t influence what happens in the other branches. However, if you did something that could with some probability lead to another person’s death (e.g. shooting at them and missing), you should understand that it was a bad thing which made you (in some other branch) a murderer, so you should not do it again (but neither should you do it again in One World). On the other hand, if you did something that could lead to a good outcome, but you randomly failed, you did (in some other branch) a good thing. (Careful! You have a big bias to overestimate the probability of the good outcome. So don’t reward yourself too much for trying.)
That doesn’t seem plausible. If there’s a 0.01% probability that I’m a murderer (and there is only one world), then if I’m not in fact a murderer, I have committed no murders. If there are many worlds, then I have committed no murders in this world, but the ‘me’ in another world (whose path approximates mine to the extent that I would call that person ‘me’) in fact is a murderer. It seems like a difference between some murders and no murders.
I’m saying that depending on what I do, I end up in a non-murder path or a murder path. But nothing I do can change the number of non-murder or murder paths. So it’s not deterministic as regards my position in this selection, just deterministic as regards the selection itself. I can’t causally interact with other worlds, so my not murdering in one world has no effect on any other worlds. If there are five murder worlds branching off from myself at 17, then there are five no matter what. Maybe I can adjust that number prior to the day I turn 17, but there’s still a fixed number of murder worlds extending from the day I was born, and there’s nothing I can do to change that. Is that a faulty case of determinism?
That’s a good point. Would you be willing to commit to an a priori ethical principle such that ought implies can?
That’s equivalent to saying “if at the moment of my 17th birthday there is a probability 5% that I will murder someone, then in that moment there is a probability 5% that I will murder someone no matter what”. I agree with this.
That’s equivalent to saying “if at the day I was born there is an X% chance that I will become a murderer, there is nothing I can do to change that probability on that day”. True; you can’t travel back in time and create a counterfactual universe.
It is explained here, without the Many Worlds.
Short summary: You are mixing together two different views—timeful and timeless view. In timeful view you can say “today at 12:00 I decided to kill my neighbor”, and it makes sense. Then you switch to the position of a ceiling cat, an independent observer outside of our universe, outside of our time, and say “I cannot change the fact that today at 12:00 I killed my neighbor”. Yes, it also makes sense; if something happened, it cannot non-happen. But we are confusing two narrators here: the real you, and the ceiling cat. You decided to kill your neighbor. The ceiling cat cannot decide that you didn’t, because the ceiling cat does not live in this universe; it can only observe what you did. The reason you killed your neighbor is that you, existing in this universe, have decided to do so. You are the cause. The ceiling cat sees your action as determined, because it is outside of the universe.
If we apply it to Many World hypothesis, there are 100 different yous, and one ceiling cat. From those, 5 yous commit murder (because they decided to do so), and 95 don’t (because they decided otherwise, or just failed to murder successfully). Inside the universes, the 5 yous are murderers, the 95 are not. The ceiling cat may decide to blame those 95 for the actions of those 5, but that’s the ceiling cat’s decision. It should at least give you credit for keeping the ratio 5:95 instead of e.g. 50:50.
That’s tricky. In some sense, we can’t do anything unless the atoms in our bodies do it; and our atoms are following the laws of physics. In some sense, there is no such thing as “can”, if we want to examine things on the atom level. (And that’s equally true in Many Worlds as in One World; except that in One World there is also randomness in the equations.) In another sense, humans are decision-makers. But we are decision-makers built from atoms, not decision-makers about the atoms we are built from.
So my answer would be that “ought” implies psychological “can”; not atomic “can”. (Because the whole ethics exists on psychological level, not on atomic level.)
This sounds right to me, and I think your subsequent analysis is on target. So we have two views, the timeless view and the timeful view, and we can’t (at least directly) translate ethical principles like ‘minimize evils’ across the views. So say we grant this and move on from here. Maybe my question is just that the timeless view is one in which ethics seems to make no sense (or at least not the same kind of sense), while the timeful view is one in which it is a pressing concern. Would you object to that?
I didn’t fully realize that previously, but yes—in the timeless view there is no time, no change, no choice. Ethics is all about choices.
Ethical reasoning only makes sense in time, because the process of ethical reasoning is moving the particles in your brain, and the physical consequence of that can be a good or evil action. Ethics can have an influence on the universe only if it is a part of the universe. The whole universe is determined only by its laws and its contents. The only way ethics can act is through the brains of people who contemplate it. Ethics is a human product (though we can discuss how much freedom we had in creating this product; whether it would be different if we had a different history or biology) and it makes sense only on the human level, not on the level of particles.
I just stick with the timeless view and don’t have any trouble with ethics in it, but that’s because I’ve got all the phenomena of time fully embedded in the timeless view, including choice and morality. :)
I’m happy with the idea that ethics is a human product (since this doesn’t imply that it’s arbitrary or illusory or anything like that). I take this to mean, basically, that ethics concerns the relation of some subsystems with others. There’s no ethical language which makes sense from the ‘top-down’ or from a global perspective. But there’s also nothing to prevent (this is Eliezer’s meaning, I guess) a non-global perspective from being worked out in which ethical language does make sense. And this perspective isn’t arbitrary, because the subsystems working it out have always occupied that perspective as subsystems. To see an algorithm from the inside is to see the world as a whole by seeing it as potentially involved in this algorithm. And this is what leads to the confusion between the global, timeless view and the (no less global, in some sense) timeful inside-an-algorithm view.
If that’s all passably normal (as skeptical as I am at the coherence of the idea of ‘adding up to normality’) then the question that remains is what I should do with my idea of things mattering ethically. Maybe the answer here is to see ethical agents as ontologically fundamental or something, though that sounds dangerously anthropocentric. But I don’t know how to justify the idea that physically-fundamental = ontologically-fundamental either.
I’m not Viliam Bur, but I wouldn’t quite agree with this, in that time matters. It’s not incoherent to talk about a system that can’t do X, could have done X, and ought to have done X, for example. It’s similarly not incoherent to talk about a system that can’t do X now but ought to have acted in the past so as to be able to do X now.
But yes, in general I would say the purpose of ethics is to determine right action. If we’re talking about the ethical status of a system with respect to actions we are virtually certain the system could not have taken, can not take, and will not be able to take, then we’re no longer talking about ethics in any straightforward sense.
Okay, so let’s adopt ‘ought implies can’ then, and restrict it to the same tense: if I ought to do X, I can do X. If I could have done (but can no longer do) X, then I ought to have done (but no longer ought to do) X.
How does this, in connection with MW, interact with consequentialism? The consequences of my actions can’t determine how much murdering I do (in the big world sense), just whether or not I fall on a murder-path. In the big world sense, I can’t (and therefore ought not) change the number of murder-paths. The consequence at which I should aim is the nature of the path I inhabit, because that’s what I can change.
Maybe this is right, but if it is, it seems to me to be an oddly subjective form of consequentialism. I’m not sure if this captures my thought, but it seems that it’s not as if I’m making the world a better place, I’m just putting myself in a better world.
It seems like you are not making the world a better place because you think of a fixed probability of becoming a murderer, which your decisions cannot change. But the probability of you becoming a murderer is a result of your decisions.
You have reversed the causality, because you imagine the probability of you ever being a murderer as something that existed sooner, and your decisions about murdering as something that happens later.
You treat probability of something happening in future as a fact that happened in the past. (Which is a common error. When humans talk about “outside of time”, they always imagine it in the past. No, the past is not outside of time; it is a part of time.)
I’m not at all convinced that I endorse what you are doing with the word “I” here.
If we want to say that there exists some entity I, such that I commit murders on multiple branches, then to also talk about “the nature of the path I inhabit” seems entirely incoherent. There is no single path I inhabit, I (as defined here) inhabits all paths.
Conversely, if we want to say that there exists a single path that I inhabit (a much more conventional way of speaking), then murders committed on other branches are not murders I commit.
I’m not sure if that affects your point or not, but I have trouble refactoring your point to eliminate that confusion, so it seems relevant.
True, good point. That seems to be salt on the wound though. What I meant by ‘I’ is this: say I’m in path A. I have a parallel ‘I’ in path B if the configuration of something in B is such that, were it in A at some time past or future, I would consider it to be a (perhaps surprising) continuation of my existence in A.
If the Ai and the Bi are the same person, then I’m ethically responsible for the behavior of Bi for the same reasons I’m ethically responsible for myself (Ai). If Ai and Bi are not the same person (even if they’re very similar people) then I’m not responsible for Bi at all, but I’m also no longer de-coherent: there is always only one world with me in it. I take it neither of these options is true, and that some middle ground is to be preferred: Bi is not the same person as me, but something like a counterpart. Am I not responsible for the actions of my counterparts?
That’s a hard question to answer, but say I get uploaded and copied a bunch of times. A year later, some large percentage of my copies have become serial killers, while others have not. Are the peaceful copies morally responsible for the serial killing? If we say ‘no’ then it seems like we’re committed to at least some kind of libertarianism as regards free will. I understood the compatibilist view around here to be that you are responsible for your actions by way of being constituted in such and such a way. But my peaceful copies are constituted in largely the same way as the killer copies are. We only count them as numerically different on the basis of seemingly trivial distinctions like the fact that they’re embodied in different hardware.
Well, OK. We are, of course, free to consider any entity we like an extension of our own identity in the sense you describe here. (I might similarly consider some other entity in my own path to be a “parallel me” if I wish. Heck, I might consider you a parallel me.)
It is not at all clear that I know what the reasons are that I’m ethically responsible for myself, if I am the sort of complex mostly-ignorant-of-its-own-activities entity scattered across multiple branches that you are positing I am. Again, transplanting an ethical intuition (like “I am ethically responsible for my actions”) unexamined from one context to a vastly different one is rarely justified.
So a good place to start might be to ask why I’m ethically responsible for myself, and why it matters.
Can you say more about that preference? I don’t share it, myself. I would say, rather, that I have some degree of confidence in the claim “Ai and Bi are the same person” and some degree of confidence that “Ai and Bi are different people,” and that multiple observers can have different degrees of confidence in these claims about a given (Ai, Bi) pair, and there’s no fact of the matter.
Say I belong to a group of distinct individuals, who are born and raised in the usual way, with no copying involved. A year later, some large percentage of the individuals in my group become serial killers, while others do not. Are the peaceful individuals morally responsible for the serial killing?
Almost all of the relevant factors governing my answer to your example seem to apply to mine as well. (My own answer to both questions is “Yes, within limits,” those limits largely being a function of the degree to which observations of Ai can serve as evidence about Bi.)
Good news! It is totally off base. There is nothing in quantum mechanics requiring that the number of branches corresponding to an arbitrary macroscopic event and its negation must be equal.
Aww, you had my hopes up. There’s nothing in my set-up that requires them to be equal either, just that the numbers be fixed.
That feeling of arbitrariness is, IMHO, worth exploring more carefully.
Suppose, for example, it turns out that we don’t live in a Big World… that this is all there is, and that events either happen in this world or they don’t happen at all. Suppose you somehow were to receive confirmation of this. Big relief, right? Now you really can reduce the total amount of whatever in all of existence everywhere, so actions have meaning again.
But then you meet someone who says “But what about hypothetical people? No matter how many people I don’t actually murder, there’s still countless hypothetical people being hypothetically murdered! And, sure, you can tell me to just worry about actual people and don’t worry about the other, but whence that rule? It seems arbitrary.”
Would you find their position reasonable?
What would you say to them, if not?
Well put. This actually does come up in a philosophical view known as modal realism. Roughly, if we can make true or false claims about possible worlds, then those worlds must be actual in order to be truth-makers. So all possible worlds are actual.
If someone said to me what you said he said, suppose I ask this in reply:
E: “Wait, are those hypothetical people being hypothetically murdered? Is that true?”
S: “Yes! And there’s nothing you can do!”
E: “And there’s some reality to which this part of the map, the hypothetical-people-being-murdered, corresponds? Such that the hypothetical murder of these people is a real part of our world?”
S: “Well, sure.”
E: “Okay, well if we’re going to venture into modal realism then this just conflicts in the same way.”
S: “Suppose we’re not modal realists then. Suppose there’s just not really a fact of the matter about whether or not hypothetical, and therefore non-existent, people are being murdered.”
E: “No problem. I’m just interested in reducing real evils.”
S: “Isn’t that an arbitrary determination?”
E: “No, it’s the exact opposite of arbitrary. I also don’t take non-existent evidence as evidence, I don’t eat non-existent fruit, etc. If we call this arbitrary, then what isn’t?”
I would certainly say you’re justified in not caring about hypothetical murders. I would also say you’re justified in not caring about murders in other MW branches.
What you seem to want to say here is that because murders in other MW branches are “actual”, you care about them, but since murders in my imagination are not “actual”, you don’t.
I have no idea what the word “actual” could possibly refer to so as to do the work you want it to do here.
There are certainly clusters of consistent experience to which a hypothetical murder of a hypothetical person corresponds. Those clusters might, for example, take the form of certain patterns of neural activation in my brain… that’s how I usually model it, anyway. I’m happy to say that those are “actual” patterns of neural activation. I would not say that they are “actual” murdered human beings.
That said, I’m not really sure it matters if they are. I mean, if they are, then… hold on, let me visualize… there: I just “actually” resurrected them and they are now “actually” extremely happy. Was their former murder still evil? At best, it seems all of my preconceived notions about murder (e.g., that it’s a permanent state change of some kind) have just been thrown out the window, and I should give some serious thought to why I think murder is evil in the first place.
It seems something similar is true about existence in a Big World… if I want to incorporate that into my thinking, it seems I ought to rethink all of my assumptions. Transplanting a moral intuition about murder derived in a small world into a big world without any alteration seems like a recipe for walking off conceptual cliffs.
Right, exactly. I’m taking this sense of ‘actual’ (not literally) from the sequences. This is from ‘On Being Decoherent’:
Later on in this post EY says that the Big World is already at issue in spatial terms: somewhere far away, there is another Esar (or someone enough like me to count as me). The implication is that existing in another world is analogous to existing in another place. And I certainly don’t think I’m allowed to apply the ‘keep your own corner clean’ principle to spatial zones.
In ‘Living in Many Worlds’, EY says:
I take him to mean that there are really, actually many other people who exist (just in different worlds) and that I’m responsible for the quality of life for some sub-set of those people. And that there really are, actually, many people in other worlds who have discovered or know things I might take myself to have discovered or be the first to know. Such that it’s a small but real overturning of normality that I can’t really be the first to know something. (That, I assume, is what an implication of MW for ethics amounts to: some overturning of some ethical normality.)
If you modeled it to the point that you fully modeled a human being in your brain, and then murdered them, it seems obvious that you did actually kill someone. Hypothetical murders (but considered) fail to be murders because they fail to be good enough models.
Yes...obviously!
Ordinarily, I would describe someone who is uncertain about obvious things as a fool. It’s not clear to me that I’m a fool, but it is also not at all clear to me that murder as you’ve defined it in this conversation is evil.
If you could explain that obvious truth to me, I might learn something.
I didn’t mean to call you a fool; I just don’t think the disruption of your intuitions is a disruption of your ethical intuitions. It’s unintuitive to think of a human being as something fully emulated within another human being’s brain, but if this is actually possible, it’s not unintuitive that ending this neural activity would be murder (if it weren’t some other form of killing a human being). My point was just that the distinction in hardware can’t make a difference to the question of whether or not ending a neural activity is killing, and given a set of constants, murder.
Since I don’t think we’re any longer talking about my original question, I think I’ll tap out.
It all adds up to normality.
How do you know it all adds up to normality? What should I anticipate if it does, and what should I anticipate if it doesn’t? Or is this an a priori principle?
Which means that your ethics should not depend on the potential existence of other worlds we have no way of interacting with. In other words, while it might well be simpler (for some people) to reason about your ethics using the many worlds paradigm, the outcome of this reasoning should not depend on the number of worlds.
So, I’ve been thinking about this, and say I and everyone I know believes that it’s possible to be the first one, absolutely, to whistle a tune. This is, for our strange culture, an important ethical belief. That belief is part of what I would call ‘normality’. Now, some jerk comes along and proves MW, and so I learn that for any tune I would consider novel, odds are that it’s been whistled before in another world (I’m taking this example from EY in the sequences). So, depending on my normal, MW may add up to normality, and it may not. In a much more obvious sense, if my normal is Newtonian physics, MW doesn’t add up to normality either.
So what does adding up to normal mean? Consider that my other stupid question. Egan’s law seems to go un-argued for and unexplained. If it just means what the paragraph you cite says, then MW may well abolish or come into conflict with our ethical ideas, since apparently it comes into conflict with all kinds of other ideas (like false physical theories) and none of this requires the destruction of the solar system or flying apples.
It means that if you do not observe pink unicorns daily, no new weird and wonderful theory should claim that you should have. Or, as EY puts it “apples didn’t stop falling, planets didn’t swerve into the Sun”. Another name for this is the correspondence principle.
If your ethics requires for you to be the first tune whistler in the multiverse, not just in this world, it’s not a useful ethics.
The usefulness of the ethics (if that’s the right standard to apply to an ethical idea) is not relevant to the example.
That is, unless you want to posit (and we should be super, super clear about this) that there is an a priori principle that any ethics capable of being contradicted by a true physical theory is not useful. But I very much doubt you want to say that.
I think modern physics pretty obviously doesn’t add up to normality in a number of cases. Long debates about cryonics took place because part of many people’s normal understanding of personal identity (an ethical category if there ever was one) involved a conception of material constituents like atoms such that there can be my atoms versus your atoms. This just turned out to be nonsense, as we discovered through investigation of physics. The fact that atoms no more have identities qua particular instances than do numbers overturned some element of normality.
Given cases like that, how does one actually argue for Egan’s law? It’s not enough to just state it.
It means that if in your branch you are the first one to whistle the tune, there is no one else in your branch to contradict you. (Just as you would expect in One World.) In some other branch someone else was first, and in that branch you don’t think that you were the first, so again no conflict.
Then “adding up to normal” means that even when Einstein ruins your model, all things will behave the same way as they always did. Things that obeyed Newtonian physics within a given precision will continue to do so. You will only see exceptions in unusual situations, such as GPS satellites. (But if you had GPS satellites before Einstein invented his theory, you would have seen those exceptions too. You just didn’t know that would happen.)
In case of morality it means that if you had a rule “X is good” because it usually has good consequences (or because it follows the rules, or whatever), then “X is good” even with Many Worlds. The exception is if you try to apply moral significance to a photon moving through a double slit.
An explanation may change: for example it was immoral to say “if the coin ends this side up, I will kill you”, and it is still immoral to do so, but the previous explanation was that “it is bad to kill people with 50% probability” and the new explanation is “it is bad to kill people in 50% of branches” (which means killing them with 50% probability in a random branch).
Okay, so on reflection, I think the idea that it all adds up to normality is just junk. It doesn’t mean anything. I’ll try to explain:
A: MW comes into conflict with this ethical principle.
B: It can’t come into conflict. Physics always adds up to normality.
A: Really? Suppose I see an apple falling, and you discover that there’s no such thing as an apple, but that what we called apples are actually a sub-species of blueberries. Now I’ve learned that I’ve in fact never seen an apple fall, since by ‘apple’ I meant the fruit of an independent species of plant. So, normality overturned.
B: No, that’s not an overturning of normality, that’s just a change of explanation. What you saw was this greenish round thing falling, and you explained this as an ‘apple’. Now your explanation is different, but the thing you observed is the same.
A: Ah, but let’s say science discovers that the green round thing I saw isn’t green at all. In fact, green is just the color that bounces off the thing. If it’s any color, it’s the color of the wavelengths of light it absorbs. Normality overturned.
B: But that’s just what being ‘green’ now means. What you saw was some light hitting your receptors in a way that varied over time, and you explained this as a green thing moving. The observation, the light hitting your eye over time, is the same. The explanation has shifted.
A: Now say that it turns out that (bear with me) there is no motion or time. What I thought was some light hitting my retina over time is just my own brain co-evolving with a broader wave-function. Now that’s overturning normality.
B: No, what you experienced qualitatively is the same, but the explanation has changed.
A: What did I experience qualitatively?
B: If you’re willing to go into plausible but hypothetical discoveries, I can’t give it any description that is basic enough that it can’t be ‘overturned’. Even ‘experience’ is probably overturnable.
A: That’s why ‘it all adds up to normality’ is junk. By that standard, nothing is normal. If anything I can describe as a phenomenon is normal, then it can be overturned under that description.
Your decisions aren’t random! If you decide to do something then the vast majority of your other selves will decide the same thing. When you do good you do indeed do good in all universes branching from this one.
(But what if what I just said wasn’t the case? Would you let your sense of ethics override the physical evidence? Look at the causal history of your morality: it comes from evolution. Do you think that if MW were true then evolution would have been forced to happen differently, in order to give you different morals?)
This is a good question, but I think it’s important to understand that it’s a good question. Evidence from the physical sciences doesn’t have some fixed priority over other kinds of evidence. One could argue that it’s an unusually good source of evidence, of course, but I’m not sure how to make the comparison in this case.
This question used to worry me a lot too, and at one point I also considered the idea that we can’t “change the fundamental amount of goodness” but just choose a path through the branching worlds.
The view that’s currently prevalent among LWers who study decision theory is that you should think of yourself as being able to change mathematical facts, because decisions are themselves mathematical facts and by making decisions you determine other mathematical facts via logical implication. So for example the amount of goodness in a deterministic universe like MWI, given some initial conditions, is a mathematical fact that you can change through your decisions.
Hmm, I don’t think I understand that at all: how can one change a mathematical fact? Aren’t mathematical facts fixed? Is there something you could point me to, which explains this?
Try Towards a New Decision Theory and Controlling Constant Programs. Also, I used the word “change” in my comment since you were asking the question in terms of “change”, but perhaps a better term is “control”, which is what Nesov uses.
The issue is that MWI does not address the phenomenon of a single path being empirically special (your path). Theories, in the sense of the code you would obtain by running Solomonoff induction on your sensory input, have to address this phenomenon: they predict (or guess) sensory input rather than produce something which merely contains that input somewhere in the middle of an enormous stream of alternatives. [Putting aside for the moment that Solomonoff induction with a Turing machine would have trouble with rotational and other symmetries.]
That is true of physics in general: it is by design concerned with predicting our sensory input, not ‘explaining it away’ by producing an enormous body of things within which the input can be found. This is why MWI, as it stands, is seen as unsatisfactory, and why having the un-physical collapse of CI is acceptable. The goal is to guess the sensory input best, and thus the choice of path—even if made randomly—has to be part of the theory.
Furthermore, if one is to seek the shortest ‘explanatory’ theory which contains you and your input somewhere within it, but doesn’t have to include the ‘guess where you are’ part, then MWI is not the winner; a program that iterates over all theories of physics and simulates them is, and you get another sort of multiverse.
edit: On a more general note, one shouldn’t be convinced simply because one can’t see a simpler alternative. It’s very hard to see alternatives in physics. Here is a good article about the issue.