A theory of everything, as I see it (and apparently Wikipedia agrees), would allow us (in principle, given full information and enough resources) to predict every outcome. So every other aspect of the physical universe would be (again, in principle) derivable from it.
I think I’m saying that there will be parts of a theory of everything which just won’t compress small enough to fit into human minds, not just that the consequences of a TOE will be too hard to compute.
Do you think a theory of everything is possible?
Parts that won’t compress? Almost certainly: the expansion of a small part of a system can have much higher Kolmogorov complexity than the entire theory of everything.
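To sketch why, in informal notation of my own (this is a standard Kolmogorov-complexity argument, not something from the thread): a part is recoverable from the whole theory plus an address saying where the part sits, so

```latex
K(\text{part}) \;\le\; K(\text{TOE}) + K(\text{address of part}) + O(1)
```

and since the address, unlike the theory, can be algorithmically random, it is the address term that dominates: a simple whole can contain parts far more complex than itself.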
The Tegmark IV multiverse is so big that a human brain can’t comprehend nearly any of it, but the theory as a whole can be written with four words: “All mathematical structures exist”. In terms of Kolmogorov complexity, it doesn’t get much simpler than those four words.
Anyone reading this who hasn’t read any of Tegmark’s writing should. http://space.mit.edu/home/tegmark/crazy.html Tegmark is one of the best popular science writers out there, so the popular versions he has posted aren’t dumbed down; they are just missing most of the math.
Tegmark predicts that in 50 years you will be able to buy a t-shirt with the theory of everything printed on it.
To be fair, every one of those words is hiding a substantial amount of complexity. Not as much hidden complexity as “A wizard did it” (even shorter!), but still.
(I do still find the Level IV Multiverse plausible, and it is probably the most parsimonious explanation of why the universe happens to exist; I only mean that conveying a real understanding of it still takes a bit more than four words.)
Actually, I’m quite unclear about what the statement “All mathematical structures exist” could mean, so I have a hard time evaluating its Kolmogorov complexity. I mean, what does it mean to say that a mathematical structure exists, over and above the assertion that the mathematical structure was, in some sense, available for its existence to be considered in the first place?
ETA: When I try to think about how I would fully flesh out the hypothesis that “All mathematical structures exist”, all I can imagine is that you would have the source code for a program that recursively generates all mathematical structures, together with the source code of a second program that applies the tag “exists” to all the outputs of the first program (see the sketch after the two problems below).
Two immediate problems:
(1) To say that we can recursively generate all mathematical structures is to say that the collection of all mathematical structures is denumerable. Maintaining this position runs into complications, to say the least: there are, for instance, uncountably many subsets of the natural numbers, each a mathematical structure in its own right.
(2) More to the point that I was making above, nothing significant really follows from applying the tag “exists” to things. You would have functionally the same overall program if you applied the tag “is blue” to all the outputs of the first program instead. You aren’t really saying anything just by applying arbitrary tags to things. But what else are you going to do?
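Here is a minimal sketch of that two-program picture (Python; the function names and the use of finite binary strings as stand-ins for encodings of structures are invented purely for illustration, and a real enumeration would have to confront problem (1)). It also makes problem (2) concrete: swapping the tag changes nothing functional.

```python
from itertools import count

def all_structures():
    """Toy stand-in for 'recursively generate all mathematical structures':
    enumerate every finite binary string, pretending each encodes one."""
    yield ''
    for n in count(1):
        for i in range(2 ** n):
            yield format(i, f'0{n}b')

def tag(structures, label):
    """The second program: stamp an arbitrary tag on every output."""
    return ((label, s) for s in structures)

exists_program  = tag(all_structures(), 'exists')
is_blue_program = tag(all_structures(), 'is blue')
# The two programs are functionally identical; the label does no work.
```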
What are the Tegmark multiverses relevant to? Why should I try to understand them?
Really? In which parallel universe? Every one? This one?
This one.
Don’t we live in a multiverse? Doesn’t our Universe split in two after every quantum event?
How, then, can Tegmark & Co. predict anything for the next 50 years? Almost anything will certainly happen somewhere in the Multiverse, and so will almost everything opposite, only on the other side of the Multiverse.
According to Tegmark, at least.
Now he predicts a t-shirt in 50 years’ time! Isn’t that a little weird?
All predictions in a splitting-multiverse setting have to be understood as saying something like “in the majority of resulting branches, the following will be true.” Otherwise predictions become meaningless. This fits in nicely with a probabilistic understanding. The correct probability of the event occurring is the fraction of universes descended from this current universe that satisfy the condition.
Edit: This isn’t quite true. If I flip a coin, the probability of it coming up heads is in some sense 1/2, even though, if I flip it right now, any quantum effects might be too small to have any effect on the flip. There’s a distinction between probability due to fundamentally probabilistic aspects of the universe and probability due to ignorance.
Let’s remember that if we’re talking about a multiverse in the MWI sense, then universes have to be weighted by the squared norm of their amplitude. Otherwise you get, well, the ridiculous consequences being talked about here… (as well as being able to solve problems in PP in polynomial time on a quantum computer).
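In symbols (informal notation of my own, with a_i the amplitude of branch i), the squared-norm weighting is just the Born rule:

```latex
P(E) \;=\; \sum_{i \,:\, \text{branch } i \text{ satisfies } E} |a_i|^2, \qquad \sum_i |a_i|^2 \;=\; 1
```

So a branch’s share of the probability is fixed by its amplitude, not by how many times it subsequently splits.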
Right, OK. So in that case, even if more new universes are being created by a given specific descendant universe, the total measure of that set of universes won’t be any higher than that of the original descendant universe, yes? So that makes this problem go away.
Any credible reference to that?
Not off the top of my head. It follows from having the squared norm and from the transformations being unitary. Sniffnoy may have a direct source for the point.
in the majority of resulting branches, the following will be true

How do you know that something will be included in the majority of branches? Suppose that a nuclear war starts in a branch. A lot of radioactivity will be around, a lot of quantum events, a lot of splittings, and a lot of “postnuclear” parallel worlds. The majority? Maybe, I don’t know. Does Tegmark know? I don’t think so.
The small amount of additional radioactivity shouldn’t substantially alter how many branches there are. Keep in mind that in the standard multiverse model for quantum mechanics, a split occurs for a lot of events that have nothing to do with radioactivity. For example, a lot of behavior with electrons will also cause splitting. The additional radioactivity from a nuclear exchange simply won’t matter much.
ANY increase, for whatever reason, in the number of splittings would trigger an exponential surge of that particular branch.
The number of splittings is the dominant fitness factor: those universes which split the most inherit the Multiverse.
If you buy this Multiverse theory, of course. I don’t.
Hmm, that’s a valid point. It doesn’t increase linearly with the number of splittings. I still don’t think it should matter. Every atom that isn’t a simple hydrogen atom is radioactive to some extent (the probability of decay is just really, really tiny). I’m not at all sure that a radioactive planet (in the sense of having a lot of atoms with a non-negligible chance of decay) will actually produce more branches than one which does not. Can someone who knows more about the relevant physics comment? I’m not sure I know enough to make a confident statement about this.
MWI is almost the default religion of this list’s members. And, as in every religion, awkward questions are ignored. Downvoted, maybe.
It might help if you read the relevant sections of the conversation before you make accusations about something being a “religion.” Note that Sniffnoy’s remark above already resolved this.
Which of Sniffnoy’s remarks resolves this?
Everything is weighted by squared-norm of the amplitude. And, y’know, quantum mechanics is unitary. What needs to be preserved, is preserved.
More generally, we might imagine that we lived in a world where physics was just probabilistic in the ordinary way, rather than quantum (in the sense of based on amplitudes); MWI might also be a natural way to think if we lived in that world (though not as natural as it is in the world of actual QM, as in that world we wouldn’t have any real need for MWI); then, well, everything would be weighted by probability, and everything would be stochastic rather than unitary. Of course if you don’t require preservation of whatever the appropriate weighting is, you’ll get an absurd result.
You do seem to be pretty confused about what MWI says; it does not, as you seem to suggest, posit a finite number of universes, which split at discrete points, and where the probability of an event is the proportion of universes it occurs in. “Universes” here are just identified with the states that we’re looking at a wave function over, or perhaps trajectories through such, so there are infinitely many. And having the universes split and not interfere with each other, would work with ordinary probability, but it won’t work with quantum amplitudes—if that were the case we’d just see probabilistic effects, not quantum effects. The many worlds of MWI do interfere with each other. When decoherence occurs the result is to effectively split collections of universes off from each other so they don’t interfere anymore, but in a coherent quantum system the notion of splitting doesn’t make much sense.
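A toy numerical illustration of that last point (Python/NumPy; the amplitudes and the two-state “environment” are invented for the example): coherent paths interfere, while environment-entangled branches add like separate worlds.

```python
import numpy as np

# Two coherent paths into the same outcome: amplitudes add, then square.
a1, a2 = 1 / np.sqrt(2), -1 / np.sqrt(2)
print(abs(a1 + a2) ** 2)            # 0.0 -- the paths cancel (interference)
print(abs(a1) ** 2 + abs(a2) ** 2)  # 1.0 -- what plain probability predicts

# Decoherence: each path becomes correlated with an orthogonal environment
# state, so the amplitudes can no longer cancel; the branches' squared
# norms simply add, and they behave like non-interacting worlds.
env1, env2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
joint = a1 * env1 + a2 * env2       # joint path-environment amplitudes
print((np.abs(joint) ** 2).sum())   # 0.5 + 0.5 = 1.0 -- no cancellation
```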
Remember, the key suppositions of MWI are just that A. the equations of quantum mechanics are literally true all the time—there is no magical wavefunction collapse; and B. the wavefunction is a complete description of reality; it’s not guiding any hidden variables. (And I suppose, C., decoherence is responsible for the appearance of collapse, etc., but that’s more of a conclusion than a supposition.) Hence why it’s claimed here that MWI wins by Occam’s Razor. It really is the minimal interpretation of QM!
If there is an actual problem with MWI, I’d say it’s the one Scott Aaronson points out here (I doubt this observation is original to him, but not being too familiar with the history of this, it’s the first place I’d seen it; does anyone know the history of this?); the virtue of MWI is its minimality, but unfortunately it seems to be too minimal to answer this question! Assuming the question is meaningful, anyway. But the alternatives still seem distinctly unsatisfactory...
Remember, the key suppositions of MWI are just that [...] Hence why it’s claimed here that MWI wins by Occam’s Razor.

You can’t get the probabilities from those suppositions. And without the probabilities, MWI has no predictive power; it’s just a metaphysics which says “Everything that can happen does happen”, and which then gives wrong predictions if you count the worlds the way you would count anything else.
But even if you can justify the required probability measure, there is another problem. John Bell once wrote of Bohmian theories (see last paragraph here):

As with relativity before Einstein, there is a preferred reference frame in the formulation of the theory…but it is experimentally indistinguishable.
In a Bohmian theory, you take the classical theory that is to be quantized, and add to the classical equations of motion a nonlocal term, dependent on the wavefunction, which adds an extra wiggle to the motion, giving you quantum behavior. The nonlocality means that you need a notion of objective simultaneity in order to define that term. So when you construct the Bohmian counterpart of a relativistic quantum theory (i.e. of a quantum field theory), you will still see relativistic effects like length contraction and time dilation (since they are in the classical counterpart of the quantum field theory), but you have to pick a reference frame in order to make the Bohmian construction—which might be seen as an indication of its artificiality.
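For concreteness, the usual non-relativistic form of that wavefunction-dependent term is the de Broglie-Bohm guidance equation (standard textbook form, quoted from memory rather than from the thread):

```latex
\frac{d\mathbf{Q}_k}{dt} \;=\; \frac{\hbar}{m_k}\,\operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\!\Bigl(\mathbf{Q}_1,\dots,\mathbf{Q}_N,t\Bigr)
```

The velocity of particle k depends on the instantaneous positions of all N particles at once, which is exactly the nonlocality that forces a choice of simultaneity, i.e. a preferred frame.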
The same thing happens in MWI. In MWI you reify the wavefunction—you assume it is a real thing—and then you divide it up into worlds. To perform this division, you need a universal time coordinate, so relativity disappears at the fundamental level. Furthermore, since there is no particular connection between the worlds of the wavefunction in one moment, and the worlds of the wavefunction in the next moment, you don’t even have persistence of a world in time, so you can’t even think about performing a Lorentz transformation. Instead, you have a set of disconnected world-moments, with mysterious nonstandard probabilities attached to them in order to make predictions turn out right.
All of that says to me that the MWI construction is just as artificial as the Bohmian one.
You can’t get the probabilities from those suppositions. And without the probabilities, MWI has no predictive power; it’s just a metaphysics which says “Everything that can happen does happen”, and which then gives wrong predictions if you count the worlds the way you would count anything else.

Sorry, yes. I took weighting things by squared-norm of amplitude as implicit, seeing as we’re discussing QM in the first place.
That doesn’t excuse MWI at all. It could very well be that something else is needed to resolve the dilemmas.
And you haven’t answered my question, maybe something else.
The weighting quantity is conserved. So far as I can tell, that entirely answers the objection you raised. I’m really not seeing where it fails. Could you explain?
Edit: s/preserved/conserved/
If I understand you correctly, there is an equal number of world splits every second in every branch. They are all weighted, so that no branch can explode?
Is that correct?
Worlds are weighted by squared-norm of amplitude, a quantity that is conserved. If two worlds are really not interfering with each other any more, then amplitude will not somehow vanish from the future of one and appear in the future in the other.
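A toy check of that claim (Python/NumPy; the unequal 0.9/0.1 amplitudes and the random unitary are invented for the example): branch weights come from squared norms rather than branch counting, and unitary evolution leaves the total untouched.

```python
import numpy as np

# Two decohered branches with unequal amplitudes.
psi = np.array([np.sqrt(0.9), np.sqrt(0.1)], dtype=complex)
print(np.abs(psi) ** 2)                 # [0.9 0.1] -- not 1/2 each

# Any unitary evolution conserves the total squared norm, so no branch
# can "explode" in weight merely by splitting into more sub-branches.
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(A)                  # the Q factor is unitary
psi_later = U @ psi
print((np.abs(psi_later) ** 2).sum())   # 1.0, exactly as before
```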
In this remark. His expansion below should make it clear what the relevant points are.
I think a relatively simple theory of everything is possible. This is, however, not based on anything solid; I’m a Math/CS student and my knowledge of physics does not (yet!) exceed high school level.