Well, hmmm. I wonder if this qualifies as “stupid”.
Could someone help me summarize the evidence for MWI in the quantum physics sequence? I tried once, and only came up with 1) the fact that collapse postulates are “not nice” (i.e., nonlinear, nonlocal, and so on) and 2) the fact of decoherence. However, consider the following quote from Many Worlds, One Best Guess (emphasis added):
The debate should already be over. It should have been over fifty years ago. The state of evidence is too lopsided to justify further argument. There is no balance in this issue. There is no rational controversy to teach. The laws of probability theory are laws, not suggestions; there is no flexibility in the best guess given this evidence. Our children will look back at the fact that we were STILL ARGUING about this in the early 21st-century, and correctly deduce that we were nuts.
Is there other evidence as well, then? 1) seems depressingly weak, and as for 2)...
As was mentioned in Decoherence is Falsifiable and Testable, and brought up in the comments, the existence of so-called “microscopic decoherence” (which we have evidence for) is independent of so-called “macroscopic decoherence” (which—as far as I know, and I would like to be wrong about this—we do not have empirical evidence for). Macroscopic decoherence seems to imply MWI, but the evidence given in the decoherence subsequence deals only with microscopic decoherence.
I would rather not have this devolve into a debate on MWI and friends—EY above to the contrary, I don’t think we can classify that question as a “stupid” one. I’m focused entirely on EY’s argument for MWI and possible improvements that can be made to it.
(There are two different argument sets here: 1) against random collapse, and 2) for MWI specifically. It’s important to keep these distinct.)
Unless I’m missing something, EY argues that evidence against random collapse is evidence for MWI. See that long analogy on Maxwell’s equations with angels mediating the electromagnetic force.
It’s also evidence for a bunch of other interpretations though, right? I meant “for MWI specifically”; I’ll edit my comment to be clearer.
I agree, which is one of the reasons why I feel 1) alone isn’t enough to substantiate “There is no rational controversy to teach” and the like.
Quantum mechanics can be described by a set of postulates. (Sometimes five, sometimes four. It depends how you write them.)
In the “standard” Interpretation, one of these postulates invokes something called “state collapse”.
MWI can be described by the same set of postulates without doing that.
When you have two theories that describe the same data, the simpler one is usually the right one.
This falls under 1) above, and is also covered here below. Was there something new you wanted to convey?
I think 1) should probably be split into two arguments, then. One of them is that Many Worlds is strictly simpler (by any mathematical formalization of Occam’s Razor). The other one is that collapse postulates are problematic (which could itself be split into sub-arguments, but that’s probably unnecessary).
Grouping those makes no sense. They can stand (or fall) independently, they aren’t really connected to each other, and they look at the problem from different angles.
Ah, okay, that makes more sense. 1a) (that MWI is simpler than competing theories) would be vastly more convincing than 1b) (that collapse is bad, mkay). I’m going to have to reread the relevant subsequence with 1a) in mind.
I really don’t think 1a) is addressed by Eliezer; no offense meant to him, but I don’t think he knows very much about interpretations besides MWI (maybe I’m wrong and he just doesn’t discuss them for some reason?). E.g. AFAICT the transactional interpretation has what people ’round these parts might call an Occamian benefit in that it doesn’t require an additional rule that says “ignore advanced wave solutions to Maxwell’s equations”. In general these Occamian arguments aren’t as strong as they’re made out to be.
If you read Decoherence is Simple while keeping in mind that EY treats decoherence and MWI as synonymous, and ignore the superfluous references to MML, Kolmogorov and Solomonoff, then 1a) is addressed there.
The claim in parentheses isn’t obvious to me and seems probably wrong. If one replaced “any” with “many” or “most”, it would seem more reasonable. Why do you assert this applies to any formalization?
Kolmogorov Complexity/Solomonoff Induction and Minimum Message Length have been proven equivalent in their most-developed forms. Essentially, correct mathematical formalizations of Occam’s Razor are all the same thing.
The whole point is superfluous, because nobody is going to sit around and formally write out the axioms of these competing theories. It may be a correct argument, but it’s not necessarily convincing.
This is a pretty unhelpful way of justifying this sort of thing. Kolmogorov complexity doesn’t give a unique result: which programming system one uses as one’s basis can change things up to a constant. So simply looking at the fact that Solomonoff induction is equivalent to a lot of formulations isn’t really that helpful for this purpose.
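For precision, this is the standard invariance theorem that point rests on (textbook material, not something stated in this thread): for any two universal machines U and V there is a constant that depends only on the machines, not on the string being described, such that

    \[ \lvert K_U(x) - K_V(x) \rvert \;\le\; c_{U,V} \quad \text{for all strings } x. \]

So rankings of theories by complexity are only pinned down up to that machine-dependent constant.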
Moreover, there are other formalizations of Occam’s razor which are not formally equivalent to Solomonoff induction. PAC learning is one natural example.
Is it really so strange that people are still arguing over “interpretations of quantum mechanics” when the question of whether atoms existed wasn’t settled until one hundred years after John Dalton published his work?
From the Wikipedia fine-tuned universe page:
Mathematician Michael Ikeda and astronomer William H. Jefferys have argued that [, upon pre-supposing MWI,] the anthropic principle resolves the entire issue of fine-tuning, as does philosopher of science Elliott Sober. Philosopher and theologian Richard Swinburne reaches the opposite conclusion using Bayesian probability.
(Ikeda & Jefferys are linked at note 21.)
In a nutshell, MWI provides a mechanism whereby a spectrum of universes are produced, some life-friendly and some life-unfriendly. Consistent with the weak anthropic principle, life can only exist in the life-friendly (hence fine-tuned) universes. So, MWI provides an explanation of observed fine-tuning, whereas the standard QM interpretation does not.
That line of reasoning puzzles me, because the anthropic-principle explanation of fine tuning works just fine without MWI: Out of all the conceivable worlds, of course we find ourselves in one that is habitable.
This only works if all worlds that follow the same fundamental theory exist in the same way our local neighborhood exists. If all of space has just one set of constants even though other values would fit the same theory of everything equally well, the anthropic principle does not apply, and so the fact that the universe is habitable is ordinary Bayesian evidence for something unknown going on.
The word “exist” doesn’t do any useful work here. There are conceivable worlds that are different from this one, and whether they exist depends on the definition of “exist”. But they’re still relevant to an anthropic argument.
The habitability of the universe is not evidence of anything because the probability of observing a habitable universe is practically unity.
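In Bayes-factor terms (my gloss on this claim, not the commenter’s wording): if every hypothesis on the table predicts that whatever we observe is observed from a habitable universe, then

    \[ \frac{P(\text{habitable} \mid H_1)}{P(\text{habitable} \mid H_2)} \approx 1, \qquad\text{so}\qquad \frac{P(H_1 \mid \text{habitable})}{P(H_2 \mid \text{habitable})} \approx \frac{P(H_1)}{P(H_2)}, \]

i.e. conditioning on bare habitability leaves the odds between those hypotheses where the priors put them.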
Can you clarify why a conceivable world that doesn’t exist in the conventional sense of existing is relevant to an anthropic argument?
I mean, if I start out as part of a group of 2^10 people, and that group is subjected to an iterative process whereby we split the group randomly into equal subgroups A and B and kill group B, then at every point along the way I ought to expect to have a history of being sorted into group A if I’m alive, but I ought not expect to be alive very long. This doesn’t seem to depend in any useful way on the definition of “alive.”
Is it different for universes? Why?
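(An aside: the arithmetic of that thought experiment is easy to check with a throwaway simulation. This assumes, as one reading of the setup, that the halving continues until a single person is left; the group size is the 2^10 given above.)

    import random

    def run_once(n_people=2**10):
        """Halve the group repeatedly, killing group B, until one person remains.
        Returns True if person 0 survives every round."""
        alive = list(range(n_people))
        while len(alive) > 1:
            random.shuffle(alive)
            alive = alive[: len(alive) // 2]  # group A survives, group B is killed
        return 0 in alive

    trials = 50_000
    survived = sum(run_once() for _ in range(trials))
    # Any survivor has an all-A history by construction, yet survival itself is rare:
    print(survived / trials)  # close to 1/1024, i.e. roughly 0.001

Every run in which person 0 is still alive shows an unbroken history of A’s, but ahead of time the chance of that is about one in a thousand, which is the tension the example is meant to pull apart.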
I agree with all that. I don’t quite see where that thought experiment fits into the discussion here. I see that the situation where we have survived that iterative process is analogous to fine-tuning with MWI, and I agree that fine-tuning is unsurprising given MWI. I further claim that fine-tuning is unsurprising even in a non-quantum universe. Let me describe the thought experiment I have in mind:
Imagine a universe with very different physics. (1) Suppose the universe, by nature, splits into many worlds shortly after the beginning of time, each with different physical constants, only one of which allows for life. The inhabitants of that one world ought not to be surprised at the fine-tuning they observe. This is analogous to fine-tuning with MWI.
(2) Now suppose the universe consists of many worlds at its inception, and these other worlds can be observed only with great difficulty. Then the inhabitants still ought not to be surprised by fine-tuning.
(3) Now suppose the universe consists of many worlds from its inception, but they are completely inaccessible, and their existence can only be inferred from the simplest scientific model of the universe. The inhabitants still ought not to be surprised by fine-tuning.
(4) Now suppose the simplest scientific model describes only one world, but the physical constants are free parameters. You can easily construct a parameterless model that says “a separate world exists for every choice of parameters somehow”, but whether this means that those other worlds “exist” is a fruitless debate. The inhabitants still ought not to be surprised by fine-tuning. This is what I mean when I say that fine-tuning is not surprising even without MWI.
In cases (1)-(4), the inhabitants can make an anthropic argument: “If the physical constants were different, we wouldn’t be here to wonder about them. We shouldn’t be surprised that they allow us to exist.” Does that make sense?
Ah, I see.
Yes, I agree: as long as there’s some mechanism for the relevant physical constants to vary over time, anthropic arguments for the “fine-tuned” nature of those constants can apply; anthropic arguments don’t let us select among such mechanisms.
Thanks for clarifying.
Hm, only the first of the four scenarios in the grandparent involves physical constants varying over time. But yes, anthropic arguments don’t distinguish between the scenarios.
Huh. Then I guess I didn’t understand you after all.
You’re saying that in scenario 4, the relevant constants don’t change once set for the first time?
In that case this doesn’t fly. If setting the constants is a one-time event in scenario 4, and most possible values don’t allow for life, then while I ought not be surprised by the fine-tuning given that I observe something (agreed), I ought to be surprised to observe anything at all.
That’s why I brought up the small-scale example. In that example, I ought not be surprised by the history of A’s given that I observe something, but I ought to be surprised to observe anything in the first place. If you’d asked me ahead of time whether I would survive, I’d estimate roughly a 1-in-1024 chance (about .001)… a low-probability event.
If my current observed environment can be explained by positing scenarios 1-4, and scenario 4 requires assuming a low-probability event that the others don’t, that seems like a reason to choose 1-3 instead.
I’m saying that in all four scenarios, the physical constants don’t change once set for the first time. And in scenarios (2)-(4), they are set at the very beginning of time.
I was confused as to why you started talking about changing constants, but it occurs to me that we may have different ideas about how the MWI explanation of fine-tuning is supposed to run. I admit I’m not familiar with cosmology. I imagine the Big Bang occurs, the universal wavefunction splits locally into branches, the branches cool down and their physical constants are fixed, and over the next 14 billion years they branch further but their constants do not change, and then life evolves in some of them. Were you imagining our world constantly branching into other worlds with slightly different constants?
No, I wasn’t; I don’t think that’s our issue here.
Let me try it this way. If you say “I’m going to roll a 4 on this six-sided die”, and then you roll a 4 on a six-sided die, and my observations of you are equally consistent with both of the following theories:
Theory T1: You rolled the die exactly once, and it came up a 4
Theory T2: You rolled the die several times, and stopped rolling once it came up 4
...I should choose T2, because the observed result is less surprising given T2 than T1.
Would you agree? (If you don’t agree, the rest of this comment is irrelevant: that’s an interesting point of disagreement I’d like to explore further. Stop reading here.)
OK, good. Just to have something to call it, let’s call that the Principle of Least Surprise.
Now, suppose that in all scenarios constants are set shortly after the creation of a world, and do not subsequently change, but that the value of a constant is indeterminate prior to being set. Suppose further that life-supporting values of constants are extremely unlikely. (I think that’s what we both have been supposing all along, I just want to say it explicitly.)
In scenarios 1-3, we have multiple worlds with different constants. Constants that support life are unlikely, but because there are multiple worlds, it is not surprising that at least one world exists with constants that support life. We’d expect that, just like we’d expect a six-sided die to come up ‘4’ at least once if tossed ten times. We should not be surprised that there’s an observer in some world, and that world has constants that support life, in any of these cases.
In scenario 4, we have one world with one set of constants. It is surprising that that world has life-supporting constants. We ought not expect that, just like we ought not expect a six-sided die to come up ‘4’ if tossed only once. We should be surprised that there’s an observer in some world.
So. If I look around, and what I observe is equally consistent with scenarios 1-4, the Principle of Least Surprise tells me I should reject scenario 4 as an explanation.
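(A numeric gloss on the die comparison, assuming equal priors on the two theories and that the roller in T2 keeps rolling for as long as it takes; an illustrative sketch only, not something asserted in the thread.)

    from fractions import Fraction

    # Likelihood of the observation 'you show me a 4' under each theory.
    p_obs_given_T1 = Fraction(1, 6)  # one roll, and it happened to come up 4
    p_obs_given_T2 = Fraction(1, 1)  # roll until a 4 shows up, then show it

    # With equal priors, the posterior odds are just the likelihood ratio.
    print(p_obs_given_T2 / p_obs_given_T1)  # 6: T2 is six times more plausible

    # The scenarios 1-3 analogy: chance of at least one '4' in ten tosses.
    print(float(1 - Fraction(5, 6) ** 10))  # about 0.84

With equal priors the posterior odds simply track the likelihoods, which is one way of cashing out the Principle of Least Surprise.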
Would you agree?
This bit is slightly ambiguous. I would agree if Theory T1 were replaced by “You decided to roll the die exactly once and then show me the result”, and Theory T2 were replaced by “You decided to roll the die until it comes up ‘4’, and then show me the result”, and the two theories have equal prior probability. I think this is probably what you meant, so I’ll move on.
I agree that we should not be surprised. Although I have reservations about drawing this analogy, as I’ll explain below.
If we take scenario 4 as I described it — there’s a scientific model where the constants are free parameters, and a straightforward parameterless modification of the model (of equal complexity) that posits one universe for every choice of constants — then I disagree; we should not be surprised. I disagree because I think the die-rolling scenario is not a good analogy for scenarios 1-4, and scenario 4 resembles Theory T2 at least as much as Theory T1.
Scenario 4 as I described it basically is scenario 3. The theory with free parameters isn’t a complete theory, and the parameterless theory sorta does talk about other universes which kind of exist, in the sense that a straightforward interpretation of the parameterless theory talks about other universes. So scenario 4 resembles Theory T2 at least as much as it resembles Theory T1.
You could ask why we can’t apply the same argument in the previous bullet point to the die-rolling scenario and conclude that Theory T1 is just as plausible as Theory T2. (If you don’t want to ask that, please ignore the rest of this bullet point, as it could spawn an even longer discussion.) We can’t because the scenarios differ in essential ways. To explain further I’ll have to talk about Solomonoff induction, which makes me uncomfortable. The die-rolling scenario comes with assumptions about a larger universe with a causal structure such that (Theory T1 plus the observation ‘4’) has greater K-complexity than (Theory T2 plus the observation ‘4’). But the hack that turns the theory in scenario 4 into a parameterless theory doesn’t require much additional K-complexity.
I didn’t really follow this, I’m afraid.
It seems to follow from what you’re saying that the assertions “a world containing an observer exists in scenario 4” and “a world containing an observer doesn’t exist in scenario 4” don’t make meaningfully different claims about scenario 4, since we can switch from a model that justifies the first to a model that justifies the second without any cost worth considering.
If that’s right, then I guess it follows from the fact that I should be surprised to observe an environment in scenario 4 that I should not be surprised to observe an environment in scenario 4, and vice-versa, and there’s not much else I can think of to say on the subject.
By ‘explain observed fine-tuning’, I mean ‘answer the question why does there exist a universe (which we inhabit) which is fine-tuned to be life-friendly.’ The anthropic principle, while tautologically true, does not answer this question, in my view.
In other words, the existence of life does not cause our universe to be life-friendly (of course it implies that the universe is life friendly); rather, the life-friendliness of our universe is a prerequisite for the existence of life.
We may have different ideas of what sort of answers a “why does this phenomenon occur?” question deserves. You seem to be looking for a real phenomenon that causes fine-tuning, or which operates at a more fundamental level of nature. I would be satisfied with a simple, plausible fact that predicts the phenomenon. In practice, the scientific hypotheses with the greatest parsimony and predictive power tend to be causal ones, or hypotheses that explain observed phenomena as arising from more fundamental laws. But the question of where the fundamental constants of nature come from will be an exception if they are truly fundamental and uncaused.
You’re right that observing that we’re in a habitable universe doesn’t tell us anything. However, there are a lot more observations about the universe that we use in discussions about quantum mechanics. And some observations suit the idea that we know what’s going on better than others. “Know what’s going on” here means that a theory that is sufficient to explain all of reality in our local neighborhood is also followed more globally.
I glanced at Ikeda & Jefferys, and they seem to explicitly not presuppose MWI:
our argument is not dependent on the notion that there are many other universes.
At first glance, they seem to render the fine-tuning phenomenon unsurprising using only an anthropic argument, without appealing to multiverses or a simulator. I am satisfied that someone has written this down.
As a step toward this goal, I would really appreciate someone rewriting the post you mentioned to sound more like science and less like advocacy. I tried to do that, but got lost in the forceful emotional assertions about how collapse is a gross violation of Bayes, and how “The discussion should simply discard those particular arguments and move on.”
Here’s some evidence for macroscopic decoherence.
The interpretations of quantum mechanics that this sort of experiment tests are not all the same ones that Eliezer argues against. You can have “one world” interpretations that appear exactly identical to many-worlds, and indeed that’s pretty typical.
Maybe I should have written this in reply to the original post.
Actually, this is evidence for making a classical object behave in a quantum way, which seems like the opposite of decoherence.
I don’t understand your point. How would you demonstrate macroscopic decoherence without creating a coherent object which then decoheres?