The quantum interpretation debate is merely the illustrative starting point for this article, so perhaps it is boorish of me to focus on it. But…
the obvious absurdity of attempts to make quantum mechanics yield a single world.
Even if some form of many worlds is correct, this is going too far.
Eliezer, so far as I can see, in your life you have gone from a belief in objective-collapse theories, derived from Penrose, to a belief in many worlds, derived perhaps from your extropian peers; and you always pose the theoretical choice as a choice between “collapse” and “no-collapse”. You do not seem to have ever seriously considered—for example—whether the wavefunction is not real at all, but simply a construct like a probability distribution.
There are at least two ways of attempting “to make quantum mechanics yield a single world” which are obviously not absurd: zigzag interpretations and quantum causal histories. Since these hyperbolic assertions of yours about the superiority and obviousness of many-worlds frequently show up in your discourses on rationality, you really need at some point to reexamine your thinking on this issue. Perhaps you have managed to draw useful lessons for yourself even while getting it wrong, or perhaps there are new lessons to be found in discovering how you got to this point, but you are getting it wrong.
We have seemingly “fundamentally unpredictable” events at exactly the point where ordinary quantum mechanics predicts the world ought to split, and there’s no way to have those events take place in a single global world without violating Special Relativity. Leaving aside other considerations such as having the laws of physical causality be local in the configuration space, the interpretation of the above evidence is obvious and there’s simply no reason whatsoever to privilege the hypothesis of a single world. I call it for many-worlds. It’s over.
Ordinary quantum mechanics does not predict that the world splits, any more than ordinary probability theory does.
The zigzag interpretations are entirely relativistic, since the essence of a zigzag interpretation is that you have ordinary space-time with local causality, but you have causal chains that run backwards as well as forwards in time, and a zigzag in time is what gives you unusual spacelike correlation.
A “quantum causal history” (see previous link) is something like a cellular automaton with no fixed grid structure, no universal time, and locally evolving Hilbert spaces which fuse and join.
These three ideas—many worlds, zigzag, QCH—all define research programs rather than completed theories. The latter two are single-world theories and they even approximate locality in spacetime (and not just “in configuration space”). You should think about them some time.
I haven’t seen you take into account the relative costs of error of the two beliefs.
A few months ago, I asked:
Suppose Omega or one of its ilk says to you, “Here’s a game we can play. I have an infinitely large deck of cards here. Half of them have a star on them, and one-tenth of them have a skull on them. Every time you draw a card with a star, I’ll double your utility for the rest of your life. If you draw a card with a skull, I’ll kill you.”
How many cards do you draw?
I think that someone who believes in many-worlds will keep drawing cards until they die. Someone who believes in one world might not. An expected-utility maximizer would; but I’m uncomfortable about playing the lottery with the universe if it’s the only one we’ve got.
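As a minimal sketch of why a straight expected-utility maximizer keeps drawing, assuming (my reading, not stated in the problem) that a skull leaves you with some death utility, a star doubles your current utility, the remaining 40% of cards do nothing, and you precommit to n draws:

```python
# Expected utility of precommitting to n draws of Omega's deck.
# Assumptions (not in the original problem): a skull ends things at u_death,
# a star doubles current utility u, blank cards (the other 40%) change nothing.
def expected_utility(n_draws, u=1.0, u_death=0.0, p_skull=0.1, p_star=0.5):
    if n_draws == 0:
        return u
    p_blank = 1.0 - p_skull - p_star
    return (p_skull * u_death
            + p_star * expected_utility(n_draws - 1, 2 * u, u_death, p_skull, p_star)
            + p_blank * expected_utility(n_draws - 1, u, u_death, p_skull, p_star))

for n in (0, 1, 5, 10):
    print(n, round(expected_utility(n), 2))   # 1.0, 1.4, 5.38, 28.93 -- grows like 1.4**n
```

With the death utility set to 0, the expectation is 1.4^n times the starting utility, so each extra draw always looks better in expectation, however close to certain eventual death the policy gets.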
If a rational, ethical one-worlds believer doesn’t continue drawing cards as long as they can, in a situation where the many-worlds believer would, then we have an asymmetry in the cost of error. Building an FAI that believes in one world, when many worlds is true, causes (possibly very great) inefficiency and repression to delay the destruction of all life. Building an FAI that believes in many worlds, when one world is true, results in annihilating all life in short order. This large asymmetry is enough to compensate for a large asymmetry in probabilities.
(My gut instinct is that there is no asymmetry, and that having a lot of worlds shouldn’t make you any less careful with any of them. But that’s just my gut instinct.)
Also, I think that you can’t, at present, both be rational about updating in response to the beliefs of others and dismiss one-world theory as dead.
Not only is “What do we believe?” a theoretically distinct question from “What do I do about it?”, but by your logic we should also refuse to believe in spatially infinite universes and inflationary universes, since they also have lots of copies of us.
Not only is “What do we believe?” a theoretically distinct question from “What do I do about it?”
“What do we believe?” is a distinct question, and asking it is committing an error of rationality. The limitations of our minds often force us to use “belief” as a heuristic; but we should remember that it is fundamentally an error, particularly when the consequences are large.
You don’t do the expected-cost analysis when investigating a theory; you should do it before dismissing a theory. Because if, someday, you build an AI and hardcode in the many-worlds assumption because, many years before, you dismissed the one-world hypothesis from your mind and have not considered it since, you will be committing a grave Bayesian error, with possibly disastrous consequences.
(My cost-of-error statements above are for you specifically. Most people aren’t planning to build a singleton.)
I can’t speak for Eliezer, but if I were building a singleton I probably wouldn’t hard-code my own particular scientific beliefs into it, and even if I did I certainly wouldn’t program any theory at 100% confidence.
I think that someone who believes in many-worlds will keep drawing cards until they die. Someone who believes in one world might not. An expected-utility maximizer would; but I’m uncomfortable about playing the lottery with the universe if it’s the only one we’ve got.
Omega clearly has more than one universe up his sleeve. It doesn’t take too many doublings of my utility before a further doubling would require more entropy than is contained in this one. Just how many galaxies’ worth of matter perfectly optimised for my benefit do I really need?
The problem here is that it is hard to imagine Omega actually being able to double utility. Doubling utility is hard. It really would be worth the risk of gambling indefinitely if Omega actually had the power to do what he promised. If it isn’t worth it, then by definition you have your utility function wrong. In fact, if exactly half of the cards killed you and the other half doubled utility, it would still be worth gambling unless you assign exactly 0 utility to anything else in the universe in the case of your death.
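A quick numerical check of that last claim, under the assumption (mine, not spelled out above) that a skull leaves you with some residual utility u_death and a star doubles whatever utility u you currently have:

```python
# Hypothetical half/half variant: expected utility of a single draw.
u, u_death = 1.0, 0.0          # u_death = utility you assign to outcomes after your death
one_draw = 0.5 * u_death + 0.5 * (2 * u)
print(one_draw >= u)           # True; the draw is strictly better whenever u_death > 0
```

With u_death exactly 0 the draw is a wash; with any positive u_death it is a strict improvement, which is the point being made.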
Omega knows you’ll draw a skull before you get that many doublings.
That would be a different problem. Either the participant is informed that the probability distribution in question has anthropic bias based on the gamemaster’s limits or the gamemaster is not Omega-like.
I think that someone who believes in many-worlds will keep drawing cards until they die.
You have to include the presumption that there is a quantum variable that conditions the skull card, and there is a question about whether a non-quantum event strongly conditioned on a quantum event counts for quantum immortality … but assume Omega can do this.
The payoff, then, looks like it favors going to an arbitrarily high number given that quantum immortality is true. Honestly, my gut response is that I would go to either 3 draws, 9 draws, or 13 draws depending on how risk-averse I felt and how much utility I expected as my baseline (a twice-as-high utility before doubling lets me go one doubling less).
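To put rough numbers on that gut response (my arithmetic, using the stated 10% skull / 50% star odds and ignoring branch-weighted selves):

```python
# Chance of surviving n precommitted draws, and expected doublings given survival.
for n in (3, 9, 13):
    p_survive = 0.9 ** n                 # no skull in any of the n draws
    doublings = n * (0.5 / 0.9)          # conditional on surviving, each draw is a star with prob 5/9
    print(n, round(p_survive, 3), round(doublings, 1))
# 3 0.729 1.7
# 9 0.387 5.0
# 13 0.254 7.2
```

So even 13 draws leaves only about a one-in-four chance of being around for the roughly seven doublings.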
I think this says that my understanding of utility falls prey to diminishing returns when it shouldn’t (partially a problem with utility itself), and that I don’t really believe in quantum immortality—because I am choosing a response that is optimal for a non-quantum immortality scenario.
But in any reasonable situation where I encounter this scenario, my response is accurate: it takes into account my uncertainty about immortality (requires a few more things than just the MWI) and also accounts for me updating my beliefs about quantum immortality based on evidence from the bet. That any agent, even an arbitrarily powerful one, is willing to bet an arbitrarily large number of doublings of my utility against quantum immortality is phenomenal evidence against it. Phenomenal. Utility is so complicated, and doubling just gets insane so quickly.
You have to include the presumption that there is a quantum variable that conditions the skull card, and there is a question about whether a non-quantum event strongly conditioned on a quantum event counts for quantum immortality … but assume Omega can do this.
Neither the problem itself nor this response need make any mention of quantum immortality. Given an understanding of many-worlds, ‘belief in quantum immortality’ is just a statement about preferences given a certain type of scenario. There isn’t some kind of special phenomenon involved; it’s just a matter of choosing what sort of preferences you have over future branches.
That any agent, even an arbitrarily powerful one, is willing to bet an arbitrarily large number of doublings of my utility against quantum immortality is phenomenal evidence against it. Phenomenal.
No, no, no! Apart from being completely capricious with essentially arbitrary motivations, they aren’t betting against quantum immortality. They are betting a chance of killing someone against a chance of making ridiculous changes to the universe. QI just doesn’t play a part in their payoffs at all.
Your reference for “zigzag interpretations” is your own blog comment?!?
I particularly wanted people to see a zigzag explanation of quantum computing. Anyway, see John Cramer, Mark Hadley, Huw Price, and Wheeler–Feynman for examples. Like many worlds, the idea comes in various flavors.
Hrm… looking at QCH… it doesn’t seem to even claim to be a single-world model in the first place. (I’ve just begun reading it, though.)
Actually, for that matter, I think I’m misunderstanding the notion of inextendibility. Wouldn’t local finiteness ensure that every non-finite directed path is future-inextendible?
QCH is arguably a formalism rather than a philosophy. I think you could actually repurpose the QCH formalism for many worlds, by way of “consistent histories”. A “consistent history” is coarse-grained by classical standards; a space-time in which properties are specified only here and there, rather than everywhere. If you added a causal structure connecting those specified patches, you’d get something like a QCH. However, the authors are not many-worlders and an individual QCH should be a self-sufficient thing.
I think you are right. But the notion of inextendibility is meant to apply both to causal histories that continue forever and to causal histories that have terminal future events. It’s a step toward defining the analogue of past and future light-cones in a discrete event-geometry.
Well, what precisely is meant to be excluded when intersections with inextendible paths are the only ones that matter? (Ugh… I just had the thought that this conversation might summon the spambots :)) Anyway, why only the inextendible paths?
As for the rest, I want to read through and understand the paper before I comment on it. I came to a halt early on due to being confused about extendibility.
However, my overall question is: does the idea on its own naturally produce a single history, or does it still need some sort of “collapse” or other contrived mechanism to do so?
This earlier paper might also help.
The big picture is that we are constructing something like special relativity’s notion of time, for any partially ordered set. For any p and q in the set, either p comes before q, p comes after q, or p and q have no order relation. That last possibility is the analogue of spacelike separation. If you then have p, q, r, s..., all of which are pairwise spacelike, you have something resembling a spacelike slice. But you want it to be maximal for the analogy to be complete. The bit about intersecting all inextendible paths is one form of maximality—it’s saying that every maximal timelike path intersects your spacelike slice.
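For concreteness, here is a toy version of that picture (my own illustration, not taken from the paper): a five-element causal set, a pairwise-spacelike pair of elements, and a check that this “slice” meets every inextendible future-directed path.

```python
# Toy causal set: covering relations point to the future.
edges = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": [], "e": []}

def maximal_chains(node):
    """All inextendible future-directed paths starting at `node`."""
    if not edges[node]:
        return [[node]]
    return [[node] + chain for nxt in edges[node] for chain in maximal_chains(nxt)]

# "a" is the unique minimal element, so these are the poset's inextendible paths:
chains = maximal_chains("a")        # [['a','b','d'], ['a','c','d'], ['a','c','e']]

antichain = {"b", "c"}              # unrelated to each other => "spacelike"
print(all(antichain & set(chain) for chain in chains))   # True: a maximal spacelike slice
```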
Then, having learnt to think of a poset as a little space-time, you associate Hilbert spaces with the elements of the poset, and something like a local Schrödinger evolution with each succession step. But the steps can be one-to-many or many-to-one, which is why they use the broader notion of “completely positive mapping” rather than unitary mapping. You can also define the total Hilbert space on a spacelike slice by taking the tensor product of the little Hilbert spaces, and the evolution from one spacelike slice to a later one by algebraically composing all the individual mappings. All in all, it’s like quantum field theory constructed on a directed graph rather than on a continuous space.
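A rough numpy sketch of one evolution step in that spirit (again my toy example, not the paper’s formalism verbatim): two qubit-carrying nodes on a slice, their tensor-product state, a unitary step that fuses them, and a completely positive, non-unitary step written with Kraus operators.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)

# State on slice 1 = tensor product of the two node states.
slice1 = np.kron(plus, ket0)

# A local unitary step fusing the two nodes into one 4-dimensional node.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
fused = CNOT @ slice1                       # an entangled Bell state

# A many-to-one step need not be unitary: discarding the second factor is a
# completely positive, trace-preserving map, written here with Kraus operators.
kraus = [np.kron(np.eye(2), b.conj()[None, :]) for b in (ket0, ket1)]
rho_fused = np.outer(fused, fused.conj())
rho_next = sum(K @ rho_fused @ K.conj().T for K in kraus)
print(np.round(rho_next, 3))                # maximally mixed single-qubit state
```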
my overall question is: does the idea on its own naturally produce a single history, or does it still need some sort of “collapse” or other contrived mechanism to do so?
I find it hard to say how naturally it does so. The paper is motivated by the problem that the Wheeler–DeWitt equation in quantum cosmology only applies to “globally hyperbolic” spacetimes. It’s an exercise in developing a more general formalism. So it’s not written in order to promote a particular quantum interpretation. It’s written in the standard way—“observables” are what’s real, quantum states are just guides to what the observables will do.
A given history will attach a quantum state to every node in the causal graph. Under the orthodox interpretation, the reality at each node does not consist of the associated quantum state vector, but rather local observables taking specific values. Just to be concrete, since this must sound very abstract, let’s talk in terms of qubits. Suppose we have a QCH with a qubit state at every node. Orthodoxy says that these qubit “states” are not the actual states: the actuality everywhere is just 0 or 1. A many-worlds interpretation would have to say those maximal spacelike tensor products are the real states. But when we evolve that state to the next spacelike slice, it should usually become an unfactorizable superposition. This is in contradiction with the QCH philosophy of specifying a definite qubit state at each node. So it’s as if there’s a collapse assumption built in—only I don’t think it’s a necessary assumption. You should be able to talk about a reduced density matrix at each node instead, and still use the formalism.
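As a tiny illustration of that last suggestion (my sketch, not from the paper): after an entangling step, no node has a definite state vector, but each node still has a well-defined reduced density matrix.

```python
import numpy as np

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj()).reshape(2, 2, 2, 2)       # indices: (a, b, a', b')

rho_node_a = np.trace(rho, axis1=1, axis2=3)                # trace out node b
print(np.round(rho_node_a, 3))   # [[0.5, 0], [0, 0.5]] -- mixed, not a definite |0> or |1>
```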
For me the ontological significance of QCH is not that it inherently prefers a single-world interpretation, but just that it shows an alternative midway between many worlds and classical spacetime—a causal grid of quasi-local state vectors. But the QCH formalism is still a long way from actually giving us quantum gravity, which was the objective. So it has to be considered unproven work in progress.