I tend to agree with the ‘dilution’ response, which considers branches with less Hilbert-space measure to be ‘less real’. Some justification: if you’re going to just count all the ways minds can be embedded in the wave function, why stop at “normal-looking” embeddings? What’s stopping you from finding an extremely convoluted mapping such that the coffee in your mug actually instantiates 10^23 different conscious experiences? But then it becomes impossible to make any ethical comparisons at all, since every state of the universe contains infinitely many copies of all possible experiences. Using a continuous measure on experiences, à la UDASSA, lets you consider arbitrary computations to be conscious without giving all the ethical weight to Boltzmann brains.
Another reason for preferring the Hilbert measure: you could consider weighting by the Hilbert measure to be part of the definition of MWI, since it’s only by using such a weighting that it’s possible to make correct predictions about the real world.
Thanks for the response. I’m bumping up against my lack of technical knowledge here, but a few thoughts about the idea of a ‘measure of existence’. I like how UDASSA tries to explain how the Born probabilities drop out of a kind of sampling rule, and why, intuitively, I should give more ‘weight’ to minds instantiated by brains than by a mug of coffee. But this idea of ‘weight’ is ambiguous to me. Why should sampling weight (you’re more likely to find yourself as a real vs Boltzmann brain, or ‘thick’ vs ‘arbitrary’ computation) imply ethical weight (the experiences of Boltzmann brains matter far less than real brains)?

Here’s Lev Vaidman, suggesting it shouldn’t: “there is a sense in which some worlds are larger than others”, but “note that I do not directly experience the measure of my existence. I feel the same weight, see the same brightness, etc. irrespectively of how tiny my measure of existence might be.” So in order to think that minds matter in proportion to the measure of the world they’re in, while recognising they ‘feel’ precisely the same, it looks like you end up having to say that something beyond what a conscious experience is subjectively like makes an enormous difference to how much it matters morally. There’s no contradiction, but that seems strange to me — I would have thought that all there is to how much a conscious experience matters is just what it feels like, because that’s all I mean by ‘conscious experience’.

After all, if I’m understanding this right, you’re in a ‘branch’ right now that is many orders of magnitude less real than the larger, ‘parent’ branch you were in yesterday. Does that mean your present welfare matters orders of magnitude less than it did yesterday? Another approach might be to deny that arbitrary computations are conscious on independent grounds, and explain the observed Born probabilities without ‘diluting’ the weight of future experiences over time.
Also, presumably there’s some technical way of actually cashing out the idea of something being ‘less real’? Literally speaking, I’m guessing it’s best not to treat reality as a predicate at all (let alone one that comes in degrees). But that seems like a surmountable issue.
I’m afraid I’m confused by what you mean about including the Hilbert measure as part of the definition of MWI. My understanding was that MWI is something like what you get when you don’t add a collapse postulate, or any other definitional gubbins at all, to the bare formalism.
Still don’t know what to think about all this!
I think the actual reason is more like: there is nothing you can do to improve the average experience of Boltzmann brains.
Why should sampling weight (you’re more likely to find yourself as a real vs Boltzmann brain, or ‘thick’ vs ‘arbitrary’ computation) imply ethical weight (the experiences of Boltzmann brains matter far less than real brains)?
I think the weights for prediction and moral value should be the same, or at least related. Consider: if we’re trying to act selfishly, then we should make choices that lead to the best futures according to the sampling weight (conditioned on our experience so far), since the sampling weight is basically defined as our prior on future sense experiences. But then it seems strange to weigh other people’s experiences differently than our own.
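Schematically (my own notation, just to make the argument explicit): a selfish agent picks

\[
a^{*} \;=\; \arg\max_a \sum_b w(b \mid E, a)\, u_{\text{self}}(b),
\]

where b ranges over future branches, w is the sampling weight conditioned on evidence E, and u_self(b) is the agent’s own welfare in branch b. If w is the right weight to put on your own future experiences, it would be odd to swap in a different weight as soon as u counts other people’s experiences instead.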
So in order to think that minds matter in proportion to the measure of the world they’re in, while recognising they ‘feel’ precisely the same, it looks like you end up having to say that something beyond what a conscious experience is subjectively like makes an enormous difference to how much it matters morally.
I think of the measure as a generalization of what it means to ‘count’ experiences, not a property of the experiences themselves. So this is more like how, in utilitarianism, the value of an experience has to be multiplied by the number of people having it to get the total moral value. Here we’re just multiplying by the measure instead.
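In symbols (a schematic of mine, not anything standard): where a utilitarian computes a head-count-weighted sum, the measure simply replaces the count,

\[
V \;=\; \sum_i n_i\, v_i \qquad\longrightarrow\qquad V \;=\; \sum_i \mu_i\, v_i,
\]

with v_i the value of experience i, n_i the number of people having it, and μ_i its measure.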
My understanding was that MWI is something like what you get when you don’t add a collapse postulate, or any other definitional gubbins at all, to the bare formalism.
People like to claim that, but fundamentally you need to add some sort of axiom that describes how the wave function cashes out in terms of observations. The best you can get is an argument like “any other way of weighting the branches would be silly/mathematically inelegant”. Maybe, but you’re still gonna have to put it in if you want to actually predict anything. If you want to think of it in terms of writing a computer program, it simply won’t return predictions without adding the Born rule (what I’m calling the ‘Hilbert measure’ here).
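To make the computer-program point concrete, here’s a minimal sketch in Python (my own toy illustration, not anyone’s canonical formulation). Unitary evolution alone just updates a vector of amplitudes; the program has nothing to print as an ‘observation’ until you bolt on a sampling rule, and that added rule is the Born rule:

    import numpy as np

    # The "bare formalism": a qubit state and a Hadamard gate,
    # i.e. plain linear algebra on the state vector.
    state = np.array([1.0, 0.0], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    state = H @ state  # unitary evolution: amplitudes are now (0.707, 0.707)

    # At this point the program "knows" the whole wave function, but it
    # has predicted nothing. To emit an observation you must add a rule
    # mapping amplitudes to outcomes -- and that rule is the Born rule:
    probs = np.abs(state) ** 2              # <-- the extra postulate
    outcome = np.random.choice([0, 1], p=probs)
    print("observed:", outcome)

Without the last three lines the simulation runs fine and outputs nothing an experimenter could check; with them, it reproduces the 50/50 statistics.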
Got it, thanks very much for explaining.
“Correct” only in the sense that the measure of branches where it’s not correct approaches zero. So it only matters if you already value such a measure.
I mean that it correctly predicts the results of experiments and our observations—which, yes, would be different if we were sampled from a different measure. That’s the point. I’m taking for granted that we have some pre-theoretical observations to explain here, and saying that the Hilbert measure is needed to explain them.
I’m saying that the classical notions of prediction, knowledge, and observation, and the need to explain them in a classical sense, should not be a fundamental part of the theory under MWI. It is a plain consequence of the QM equations that the amplitude of branches in which the frequencies of repeated experiments contradict the Born rule tends to zero. The theory just doesn’t tell us why the Born probabilities are right for specific observables in an absolute sense, because there are no probabilities or sampling at the physical level, and the wavefunction containing all worlds continues to evolve as it did before. We can label the situation “the amplitude of branches where x is wrong tends to zero” as “we observe x”, but that would be an arbitrary ethical decision. The Hilbert measure is correct only if you want to sum over branches, but there is nothing in the physics that forces you to want anything.
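To spell out the standard toy version of that claim (a qubit with amplitudes α and β, measured N times with each result recorded):

\[
|\psi_N\rangle \;=\; \sum_{s \in \{0,1\}^N} \alpha^{\,N-k(s)}\, \beta^{\,k(s)}\, |s\rangle,
\]

where k(s) counts the 1-outcomes in the branch labelled by outcome string s. The total squared amplitude of branches whose observed frequency of 1s deviates from |β|^2 by more than ε is

\[
\sum_{k \,:\, |k/N - |\beta|^2| > \varepsilon} \binom{N}{k}\, |\beta|^{2k}\, |\alpha|^{2(N-k)} \;\longrightarrow\; 0 \quad (N \to \infty)
\]

by the law of large numbers for a Bernoulli(|β|^2) variable. (Note it is the squared-amplitude measure of those branches that vanishes; the sheer number of deviant branches does not.)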
I think some notion of prediction/observation has to be included for a theory to qualify as physics. By your definition, studying the results of e.g. particle accelerator experiments wouldn’t be part of quantum mechanics, since you need the Born rule to make predictions about them.
It has some notion—that notion is just not classical and not fundamental. What happens when you study the results of any experiment, or make predictions, is described by the theory. It just doesn’t describe it in classical or probabilistic terms, because those are not real. And it doesn’t tell you how to maximize knowledge, because that’s ambiguous without specifying how to aggregate knowledge across different branches.
I think you’re misusing the word ‘real’ here. We only think QM is ‘real’ in the first place because it predicts our experimental results, so it seems backwards to say that those (classical, probabilistic) results are actually not real, while QM is real. What happens if we experimentally discover a deeper layer of physics beneath QM? Will you then say “I thought QM was real, but it was actually fake the whole time”? But then, why would you change your notion of what ‘real’ is in response to something you don’t consider real?
The main reason is the double-slit experiment: if you start with a notion of reality that expects the photon to travel through either one slit or the other, and then nature is like ~_~, that is already sufficient reason to rethink reality. Different parts of a probability distribution don’t influence each other.
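Schematically (slit amplitudes ψ₁ and ψ₂, overall normalization elided): if the photon really went through one slit or the other, the screen would show a classical mixture, whereas quantum mechanically the amplitudes add before being squared:

\[
P_{\text{mixture}}(x) \;=\; \tfrac{1}{2}|\psi_1(x)|^2 + \tfrac{1}{2}|\psi_2(x)|^2,
\]
\[
P_{\text{observed}}(x) \;=\; |\psi_1(x) + \psi_2(x)|^2 \;=\; |\psi_1(x)|^2 + |\psi_2(x)|^2 + 2\operatorname{Re}\!\left[\psi_1^{*}(x)\,\psi_2(x)\right].
\]

The cross term is the interference pattern: exactly the part that a picture where different parts of a probability distribution never influence each other cannot produce.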
What happens if we experimentally discover a deeper layer of physics beneath QM
I mean, there is no need for hypotheticals—it’s not like we started with a probabilistic reality; we started with gods. And then everyone already changed their notion of reality to the probabilistic one in response to QM. The point is, changing one’s ontology may not be easy, but if you prohibit continual change then the Spirit of the Forest welcomes you. So yes, if we discover new, better physics and it doesn’t include interference between worlds, then sure, we dodged this bullet. But until then I see no reason not to assume MWI without special status for any measure. We don’t even lose any observations that way—we just now know what it meant to observe something.