When I hear the phrase “many worlds interpretation,” I cringe. This is not because I know something about the science (I know nothing about the science) but because of confusing things I’ve heard in science popularizations. This reaction has kept me from reading Eliezer’s sequence thus far, but I pledge to give it a fair shot soon.
Above you gave me a substitute phrase to use when I hear “observation.” Is there a similar substitute phrase to use for MWI? Should I, for example, think “probability distribution over a Hilbert space” when I hear “many worlds”, or is it something else?
Edit: Generally, can anyone suggest a lexicon that translates QM terminology into probability terminology?
I’m not sure I’m addressing your question, but I advocate in place of “many worlds interpretation” the phrase “no collapse interpretation.”
That’s very helpful. It will help me read the sequence without being prejudiced by other things I’ve heard. If all we’re talking about here is the wavefunction evolving according to Schrödinger’s equation, I’ve got no problems, and I would call the “many worlds” terminology extremely distracting. (e.g. to me it implies a probability distribution over some kind of “multiverse”, whatever that is).
Personally, I advocate “no interpretation”, in a sense “no ontology should be assigned to a mere interpretation”.
I am curious how exactly this approach would work outside of quantum physics, specifically in areas that are simpler or closer to our intuition.
I think we should use the same basic cognitive algorithms for thinking about all knowledge, not make quantum physics a “separate magisterium”. So if the “no interpretation” approach is correct, it seems to me that it should be correct everywhere. I would like to see it applied to simple physics or even mathematics (perhaps even something as simple as 2+2=4, but I don’t want to construct a strawman example here).
I was describing instrumentalism in my comment, and so far it has been working well for me in other areas as well. In mathematics, I would avoid arguing whether a theorem that is unprovable in a certain framework is true or false. In condensed matter physics, I would avoid arguing whether pseudo-particles, such as holes and phonons, are “real”. In general, when people talk about a “description of reality” they implicitly assume the map-territory model, without admitting that it is only a (convenient and useful) model. It is possible to talk about observable phenomena without using this model. Specifically, one can describe research in natural science as building a hierarchy of models, each more powerful than the one before, without mentioning the word “reality” even once. In this approach all models of the same power (known in QM as interpretations) are equivalent.
Can you elaborate on this? (I’m not voting it down, yet anyway; but it has −3 right now)
I’m guessing that your point is that seeing and thinking about experimental results for themselves is more important than telling stories about them, yes?
You could go with what Everett wanted to call it in the first place, the relative state interpretation.
To answer your “Edit” question, no, the relative state interpretation does not include probabilities as fundamental.
Thanks! Getting back to original sources has always been good for me. Is that the “Relative state” formulation of quantum mechanics?
I think it is necessary to exercise some care in demanding probabilities from QM. Note that the fundamental thing is the wave function, and the time evolution of the wave function is perfectly deterministic. Probabilities, although they are the thing that everyone takes away from QM, only appear after decoherence, or after collapse if you prefer that terminology; and we Do Not Know how the particular Born probabilities arise. This is one of the genuine mysteries of modern physics.
I was reflecting on this, and considering how statistics might look to a pure mathematician:
“Probability distribution, I know. Real number, I know. But what is this ‘rolling a die’/‘sampling’ that you are speaking about?”
Honest answer: Everybody knows what it means (come on man, it’s a die!), but nobody knows what it means mathematically. It has to do with how we interpret/model the data that comes to us from experiments, and the most philosophically defensible way to give these models meaning involves subjective probability.
“Ah, so you belong to that minority sect of Bayesians?”
Well, if you don’t like Bayesianism, you can give meaning to sampling a random variable X = X(ω) by treating the “sampled value” x as a peculiar notation for X(ω); and if you consider many such random variables, the things we do with x often correspond to theorems showing that a result holds with high probability for those random variables.
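To make the “x is just notation for X(ω)” move concrete, here is a toy Python sketch (the sample space, the variable X, and the seed are all my own illustrative choices, not anything from measure theory proper):

```python
import random

# Toy finite sample space for a six-sided die: each omega is one outcome.
OMEGA = [1, 2, 3, 4, 5, 6]

# A random variable is just a function X: Omega -> R.
def X(omega):
    return omega * omega  # e.g. the square of the face shown

# "Sampling" X means nothing more than picking an omega and evaluating X there.
rng = random.Random(0)
omega = rng.choice(OMEGA)
x = X(omega)  # the "sampled value" x is shorthand for X(omega)
print(omega, x)
```

On this reading, nothing random ever happens to X itself; all the randomness is packed into how ω gets chosen, which is exactly the part the pure mathematician was asking about.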
“Hmm. So what’s an experiment?”
Sigh.
Reflecting some more here (I hope this schizophrenic little monologue doesn’t bother anyone), I notice that none of this would trouble a pure computer scientist / reductionist:
“Probability? Yeah, well, I’ve got pseudo-random number generators. Are they ‘random’? No, of course not, there’s a seed that maintains the state, they’re just really hard to predict if you don’t know the seed, but if there aren’t too many bits in the seed, you can crack them. That’s happened to casino slot machines before; now they have more bits.”
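The seed-cracking point above can be shown with a deliberately weak generator (a toy linear congruential generator with made-up constants, nothing like a real casino machine):

```python
# A deliberately weak linear congruential generator with a 16-bit seed space.
def lcg_stream(seed, n, m=2**16, a=75, k=74):
    out = []
    state = seed
    for _ in range(n):
        state = (a * state + k) % m
        out.append(state)
    return out

# "Random" outputs produced from a seed the attacker doesn't know...
secret_seed = 12345
observed = lcg_stream(secret_seed, 5)

# ...recovered by brute force over all 2**16 possible seeds.
recovered = next(s for s in range(2**16) if lcg_stream(s, 5) == observed)
print(recovered == secret_seed)
```

With only 16 bits of state, exhausting every seed takes a fraction of a second; the fix, as the monologue says, is simply more bits.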
“Philosophy of statistics? Well, I’ve got two software packages here: one of them fits a penalized regression and tunes the penalty parameter by cross validation. The other one runs an MCMC. They both give pretty similarly useful answers most of the time [on some particular problem]. You can’t set the penalty on the first one to 0, though, unless n >> log(p), and I’ve got a pretty large number of parameters. The regression code is faster [on some problem], but the MCMC lets me answer more subtle questions about the posterior.
Have you seen the Church language or Infer.Net? They’re pretty expressive, although the MCMC algorithms need some tuning.”
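The first of those two packages can be sketched in a few lines: a one-parameter ridge regression whose penalty is tuned by leave-one-out cross validation (the data and penalty grid here are invented for illustration):

```python
# Toy 1-D ridge regression: fit y ~ b*x, penalty chosen by leave-one-out CV.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 0.9, 2.1, 2.9, 4.2]  # roughly y = x plus noise

def ridge_fit(train_x, train_y, lam):
    # Closed form for argmin_b sum (y - b*x)^2 + lam * b^2
    num = sum(x * y for x, y in zip(train_x, train_y))
    den = sum(x * x for x in train_x) + lam
    return num / den

def loo_cv_error(lam):
    # Hold out each point in turn; total the squared prediction errors.
    err = 0.0
    for i in range(len(xs)):
        b = ridge_fit(xs[:i] + xs[i+1:], ys[:i] + ys[i+1:], lam)
        err += (ys[i] - b * xs[i]) ** 2
    return err

best_lam = min([0.0, 0.1, 1.0, 10.0], key=loo_cv_error)
b = ridge_fit(xs, ys, best_lam)
print(best_lam, round(b, 3))
```

The computer scientist’s point stands: the procedure is fully mechanical, and at no step does it need an opinion about what probability “means”.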
Ah, but what does it mean when you run those algorithms?
“Mean? Eh? They just work. There’s some probability bounds in the machine learning community, but usually they’re not tight enough to use.”
[He had me until that last bit, but I can’t fault his reasoning. Probably Savage or de Finetti could make him squirm, but who needs philosophy when you’re getting things done?]
Well, among others, someone who wonders whether the things I’m doing are the right things to do.
Fair point. Thanks, that hyperbole was ill advised.