I disagree on five points. The first is also my conclusion; the second leads to the third, and the third explains the fourth. The fifth is the most interesting.
1) In contrast with the title, you did not show that the MWI is falsifiable or testable; I know the title mentions decoherence (which is falsifiable and testable), but decoherence is very different from the MWI, and for the rest of the article you talked about the MWI while calling it decoherence. You just showed that MWI is “better” according to your “goodness” index, but that index is not so good. Also, the MWI is not at all a consequence of the superposition principle: it is rather an ad hoc hypothesis made to “explain” why we don’t experience macroscopic superpositions, even though we would expect them because macroscopic objects are made of microscopic ones. But, as I will argue in the last point, the superposition of macroscopic objects is not an inevitable consequence of the superposition principle applied to microscopic objects.
2) You say that postulating a new object is better than postulating a new law: so why do we teach Galileo’s relativity by postulating its transformations, when they could be derived as a special case of the Lorentz transformations for slow speeds? The answer is that these are just models, which have to be easy enough for us to understand: in order to understand relativity well you first have to understand non-relativistic mechanics, and you can only do that by observing and measuring slow objects and then making the simplest theory that describes them (i.e., postulating the shortest mathematical rules experimentally compatible with the “slow” experiences: Galileo’s); THEN you can proceed to something more difficult and more accurate, postulating new rules to get a refined theory.
You calculate the probability of a theory and use this as an index of its “truthness”, but that confuses reality with the model of it. You can’t measure how “true” a theory is, and maybe there is no “Ultimate True Theory”: you can only measure how effective and clean a theory is at describing reality and at being understood. So, in order to index how good a theory is, you should instead calculate the probability that a person understands that theory and uses it to correctly make anticipations about reality: that means P(Galileo) >> P(first Lorentz, then show Galileo as a special case); and also P(first Galileo, then Lorentz) != P(first Lorentz, then Galileo), because you can’t expect people to be perfect rationalists: they can only be as rational as possible. The model is just an approximation of reality, so you can’t force the reality of people into the “perfect rational person” model; you gotta take into account that nobody’s perfect.
3) Because nobody’s perfect, you must take the needed RAM into account too. You said in the previous post that “Occam’s Razor was raised as an objection to the suggestion that nebulae were actually distant galaxies—it seemed to vastly multiply the number of entities in the universe”, in order to argue that the RAM accounting is irrelevant. But that argument is not valid: we rejected the hypothesis that nebulae are distant galaxies not because Occam’s Razor is irrelevant, but because we measured their distance and found that they are inside our galaxy; without this information, the simpler hypothesis would be that they are distant galaxies.
Occam’s Razor IS relevant not only to laws, but to objects too. Yes, given a limited amount of information, it could shift you toward a “simpler yet wrong model”, but it doesn’t annihilate the probability of the “right” model: with new information you would find out that you were previously wrong. But how often does Occam’s Razor lead you to discard a good model, compared to how often it lets us discard bad ones? Also, Occam’s Razor may mislead you not only when applied to objects, but also when applied to laws, so your argument discriminating where Occam’s Razor applies doesn’t stand.
4) The collapse of the wave function is a way to represent a fact: if a microscopic system S is in an eigenstate of some observable A and you measure on S an observable B which does not commute with A, your apparatus doesn’t end up in a superposition of states but gives you a unique result, and the system S ends up in the eigenstate of B corresponding to the result the apparatus gave you. That’s the fact.
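To make the fact concrete, here is a minimal numerical sketch (standard textbook spin-1/2 physics, my own illustration, not anything taken from your post): prepare the +1 eigenstate of sigma_z, measure sigma_x, apply the Born rule, and project onto the obtained eigenstate.

```python
import numpy as np

# Pauli matrices (units of hbar/2): A = sigma_z, B = sigma_x, and [A, B] != 0
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

# System S prepared in the +1 eigenstate of A = sigma_z
psi = np.array([1, 0], dtype=complex)

# Measure B = sigma_x: decompose psi in the eigenbasis of B
eigvals, eigvecs = np.linalg.eigh(sigma_x)      # columns are the eigenstates of B
probs = np.abs(eigvecs.conj().T @ psi) ** 2     # Born rule: 0.5 and 0.5

# The apparatus registers ONE outcome, and S is projected onto that eigenstate
outcome = np.random.choice(len(eigvals), p=probs)
psi_after = eigvecs[:, outcome]

print("possible results:", eigvals, "with probabilities:", probs)
print("registered result:", eigvals[outcome], "post-measurement state:", psi_after)
```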
Since the classical behavior of macroscopic objects and the stochastic, irreversible collapse seem to contradict the linearity, predictability and reversibility of the Schrödinger equation ruling microscopic systems, it appears as if there is an uncomfortable demarcation line between microscopic and macroscopic physics. So attempts have been made to either find this demarcation line, or show a mechanism for the emergence of the classical behavior from quantum mechanics, or otherwise solve or at least formalize this problem.
The Copenhagen interpretation (CI) just says: “there are classically behaving macroscopic objects and quantum-behaving microscopic ones; the interaction of a microscopic object with a macroscopic apparatus causes the stochastic and irreversible collapse of the wave function, with probabilities given by the Born rule; now shut up and do the math”. It is a rather unsatisfactory answer, primarily because it doesn’t explain what gives rise to this demarcation line and where it should be drawn; but it is nevertheless useful for representing effectively the results of typical educational experiments, where the difference between “big” and “small” is in no way ambiguous, and it lets you get familiar with the bra-ket math quickly.
The Many Worlds Interpretation (MWI) just says: “there is indeed superposition of states at the macroscopic scale too, but it is not seen because the other parts of the wave function stay in parallel, invisible universes”.
Now imagine Einstein had not developed General Relativity, but we had nevertheless developed the tools to measure the precession of Mercury and had to face the inconsistency with our predictions from Newton’s Laws: the analogue of the CI would be “the orbit of Mercury is not the one anticipated by Newton’s Laws but this other one; now if you want to calculate the transits of Mercury as seen from the Earth for the next million years you gotta do THIS math and shut up”; the analogue of the MWI would be something like “we expect the orbit of Mercury to precess at this rate X but we observe this rate Y; well, there is another parallel universe in which the precession rate of Mercury is Z such that the average between Y and Z is the expected X due to our beautiful indefeasible Newton’s Law”. Both are unsatisfactory and curiosity stoppers, but the first one avoids introducing new objects. The MWI, instead, while explaining exactly the same experimental results, introduces not only other universes: it also introduces the very concept that there are other universes which proliferate at each electron’s coughing fit. And it does so just for the sake of the human pursuit of beauty and loyalty to a (yes, beautiful, but that’s not the point) theory.
5) You talk of MWI and of decoherence as if they were the same thing, but they are quite different. Decoherence is about the loss of coherence that a microscopic system (an electron, for instance) experiences when interacting with a macroscopic, chaotic environment. Since this sounds rather relevant to the demarcation line and to the interaction between microscopic and macroscopic, it has been suggested that maybe these are related phenomena, that is: maybe the classical behavior of macroscopic objects and the collapse of the wave function of a microscopic object interacting with a macroscopic apparatus are emergent phenomena, which arise from the microscopic quantum ones through some interaction mechanism. Of course this is not an answer to the problem: it is just a road to be walked in order to find a mechanism, and we still have to find it. As you say, “emergence” without an underlying mechanism is like “magic”. Anyway, decoherence has nothing to do with MWI, though both try (or pretend) to “explain” the (apparent?) collapse of the wave function.
In the last decades decoherence has been probed and the results look promising. Though I’m not an expert in the field, I took a course about it last year and gave a seminar as the exam for the course, describing the results of an article I read (http://arxiv.org/abs/1107.2138v1). They presented a toy model of a Curie-Weiss apparatus (a magnet in a thermal bath), prepared in an initial isotropic metastable state, measuring the z-axis spin component of a spin-1/2 particle through induced symmetry breaking. Though I wasn’t totally persuaded by the Hamiltonian they wrote, and I’m sure there are better toy models, the general ideas behind it were quite convincing. In particular, they computationally showed HOW the stochastic, indeterministic collapse can emerge from just:
a) Schrödinger’s equation;
b) statistical effects due to the “large size” of the apparatus (a magnet composed of a large number N of elementary magnets, coupled to a thermal bath);
c) an appropriate initial state of the apparatus.
They postulated neither new laws nor new objects: they just made a model of a measurement apparatus within the framework of quantum mechanics (without postulating the collapse) and showed how the collapse naturally arose from it. I think that’s a pretty impressive result, worth further research more than the MWI is. It explains the collapse without postulating it, and without postulating unseen worlds.
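If you want a feel for the mechanism without reading the article, here is a minimal sketch of a much simpler toy model that I am assuming purely for illustration (a single central spin coupled to N bath spins, not the Curie-Weiss apparatus of the paper): plain Schrödinger evolution alone drives the interference terms of the system’s reduced density matrix toward zero, and it does so faster the larger N is.

```python
import numpy as np

rng = np.random.default_rng(0)

def coherence(N, t, coupling_scale=1.0):
    """|off-diagonal element| of the central spin's reduced density matrix at time t.

    Assumed toy model (NOT the paper's Curie-Weiss apparatus): a central spin-1/2
    coupled to N bath spins through H = sum_k g_k sigma_z^S sigma_z^(k), with every
    spin starting in (|0> + |1>)/sqrt(2).  Plain Schrodinger evolution then gives
    rho_01(t) = 0.5 * prod_k cos(2 g_k t), so the interference term just shrinks.
    """
    g = rng.uniform(0.0, coupling_scale, size=N)
    return 0.5 * np.abs(np.prod(np.cos(2.0 * g * t)))

for N in (1, 10, 100, 1000):
    print(f"N = {N:5d}   |rho_01| left at t = 5: {coherence(N, 5.0):.3e}")
# The bigger the "apparatus" (larger N), the faster the off-diagonal terms become
# unobservably small -- no collapse postulate and no extra worlds needed for this part.
```

This only illustrates the suppression of interference, of course; selecting a unique outcome is the harder part, and that is what the paper’s symmetry-breaking magnet is meant to account for.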
In contrast with the title, you did not show that the MWI is falsifiable or testable.
I agree that he didn’t show that it is testable, but rather showed the possibility of it (and a formalization of that possibility).
You just showed that MWI is “better” according to your “goodness” index, but that index is not so good
There’s a problem with choosing the language for Solomonoff/MML, so the index’s goodness can be debated. However, I think the index is sound in general.
You calculate the probability of a theory and use this as an index of its “truthness”, but that confuses reality with the model of it.
I don’t think he’s saying that theories fundamentally have probabilities. Rather, as a Bayesian, he assigns a prior to each theory. As more evidence accumulates, the right theory gets updated and its probability approaches 1.
The reason human understanding can’t be part of the equations is that, as EY says, shorter “programs” are more likely to govern the universe than longer “programs”, essentially because a shorter program is more likely to be produced if you throw down some random bits to make the program that governs the universe.
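A rough sketch of what I take that to mean, with made-up numbers and a crude 2^-length prior standing in for the Solomonoff prior (so an illustration of the idea, not EY’s actual formalism):

```python
# Toy illustration with made-up numbers: a 2^-length prior over candidate "programs"
# (a crude stand-in for the Solomonoff prior), followed by one ordinary Bayesian update.
theories = {                      # name: (description length in bits, P(evidence | theory))
    "short_program": (10, 0.8),
    "long_program":  (25, 0.8),   # makes the same predictions, just takes more bits to state
}

priors = {name: 2.0 ** -length for name, (length, _) in theories.items()}
Z = sum(priors.values())
priors = {name: p / Z for name, p in priors.items()}

posteriors = {name: priors[name] * like for name, (_, like) in theories.items()}
Z = sum(posteriors.values())
posteriors = {name: p / Z for name, p in posteriors.items()}

print(priors)       # the shorter program starts out overwhelmingly favored
print(posteriors)   # identical likelihoods leave that ratio exactly where it was
```

Note that human understandability never enters this calculation; only description length and predictive fit do.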
So I don’t buy your arguments in the next section.
But that argument is not valid: we rejected the hypothesis that nebulae are distant galaxies not because Occam’s Razor is irrelevant, but because we measured their distance and found that they are inside our galaxy; without this information, the simpler hypothesis would be that they are distant galaxies.
EY is comparing the angel explanation with the galaxies explanation; you are supposed to reject the angels and usher in the galaxies. In that case, the anticipations are truly the same. You can’t really prove whether there are angels.
But how often does Occam’s Razor lead you to discard a good model, compared to how often it lets us discard bad ones?
What do you mean by “good”? Which one is “better” out of two models that give the same predictions? (By “model” I assume you mean “theory”)
but it is nevertheless useful for representing effectively the results of typical educational experiments, where the difference between “big” and “small” is in no way ambiguous, and it lets you get familiar with the bra-ket math quickly.
You admit that Copenhagen is unsatisfactory but it is useful for education. I don’t see any reason not to teach MWI in the same vein.
Now imagine Einstein had not developed General Relativity, but we had nevertheless developed the tools to measure the precession of Mercury and had to face the inconsistency with our predictions from Newton’s Laws: the analogue of the CI would be “the orbit of Mercury is not the one anticipated by Newton’s Laws but this other one; now if you want to calculate the transits of Mercury as seen from the Earth for the next million years you gotta do THIS math and shut up”; the analogue of the MWI would be something like “we expect the orbit of Mercury to precess at this rate X but we observe this rate Y; well, there is another parallel universe in which the precession rate of Mercury is Z such that the average between Y and Z is the expected X due to our beautiful indefeasible Newton’s Law”.
If indeed the expectation value of the observable V of Mercury is X but we observe Y with Y != X (that is to say, the variance of V is nonzero), then there isn’t a determinate formula to predict V exactly in your first Newton/random-formula scenario. At the same time, someone who holds the Copenhagen interpretation would have the same expectation value X, but instead of saying there’s another world he says there’s a wave-function collapse. I still think that the parallel world is a deduced result from the universal wave function, superposition, decoherence, etc., which Copenhagen also recognizes. So the Copenhagen view essentially says “actually, even though the equations say there’s another world, there is none, and on top of that we are gonna tell you how this collapsing business works”. This extra sentence is what causes the Razor to favor MWI.
Much of what you are arguing seems to stem from your dissatisfaction with the formalization of Occam’s Razor. Do you still feel that we should favor something like human understanding of a theory over the probability of a theory being true based on its length?
You admit that Copenhagen is unsatisfactory but it is useful for education. I don’t see any reason not to teach MWI in the same vein.
Because it sets people up to think that QM can be understood in terms of wavefunctions that exist and contain parallel realities; yet when the time comes to calculate anything, you have to go back to Copenhagen and employ the Born rule.
Also, real physics is about operator algebras of observables. Again, this is something you don’t get from pure Schrödinger dynamics.
QM should be taught in the Copenhagen framework, and then there should be some review of proposed ontologies and their problems.
There’s a problem with choosing the language for Solomonoff/MML, so the index’s goodness can be debated. However, I think the index is sound in general.
When I hear about Solomonoff Induction, I reach for my gun :)
The point is that you can’t use Solomonoff Induction or MML to discriminate between interpretations of quantum mechanics: these are formal frameworks for inductive inference, but they are underspecified and, in the case of Solomonoff Induction, uncomputable.
Yudkowsky and other people here seem to use the terms informally, which is a usage I object to: it’s just a fancy way of saying Occam’s razor, and it’s an attempt to make their arguments more compelling than they actually are by dressing them in pseudomathematics.
The reason human understanding can’t be part of the equations is that, as EY says, shorter “programs” are more likely to govern the universe than longer “programs”, essentially because a shorter program is more likely to be produced if you throw down some random bits to make the program that governs the universe.
That assumes that Solomonoff Induction is the ideal way of performing inductive reasoning, which is debatable.
But even assuming that, and ignoring the fact that Solomonoff Induction is underspecified, there is still a fundamental problem:
The hypotheses considered by Solomonoff Induction are probability distributions on computer programs that generate observations; how do you map them to interpretations of quantum mechanics?
What program corresponds to Everett’s interpretation? What programs correspond to Copenhagen, objective collapse, hidden variable, etc.?
Unless you can answer these questions, any reference to Solomonoff Induction in a discussion about interpretations of quantum mechanics is a red herring.
So the Copenhagen view essentially says “actually, even though the equations say there’s another world, there is none, and on top of that we are gonna tell you how this collapsing business works”. This extra sentence is what causes the Razor to favor MWI.
Actually Copenhagen doesn’t commit to collapse being objective. People here seem to conflate Copenhagen with objective collapse, which is a popular misconception.
Objective collapse interpretations generally predict deviations from standard quantum mechanics in some extreme cases, hence they are in principle testable.
You can’t measure how “true” a theory is, and maybe there is no “Ultimate True Theory”: you can only measure how effective and clean a theory is at describing reality and at being understood.
Do you have some notion of the truth of a statement, other than effectively describing reality? If so, I would very much like to hear it.
No, I don’t: actually we probably agree about that; with that sentence I was just trying to underline the “being understood” requirement for an effective theory. It was meant to introduce my following objection that the order in which you teach or learn two facts is not irrelevant. The human brain has memory, so a Markovian model for the effectiveness of theories is too simple.
I doubt that you will be successful in convincing EY of the non-privileged position of the MWI. Having spent a lot of time, dozens of posts and tons of karma on this issue, I have regretfully concluded that he is completely irrational with regard to instrumentalism in general and QM interpretations in particular. In his objections he usually builds and demolishes a version of a straw Copenhagen, something that, in his mind, violates locality/causality/relativity.
One would expect that, having realized that he is but a smart dilettante in the subject matter, he would at least allow for the possibility of being wrong; alas, that is not the case.