Thanks for the response. I’m bumping up against my lack of technical knowledge here, but a few thoughts about the idea of a ‘measure of existence’ — I like how UDASSA tries to explain how the Born probabilities drop out of a kind of sampling rule, and why, intuitively, I should give more ‘weight’ to minds instantiated by brains rather than by a mug of coffee. But this idea of ‘weight’ is ambiguous to me. Why should sampling weight (you’re more likely to find yourself as a real vs Boltzmann brain, or ‘thick’ vs ‘arbitrary’ computation) imply ethical weight (the experiences of Boltzmann brains matter far less than real brains)? Here’s Lev Vaidman, suggesting it shouldn’t: “there is a sense in which some worlds are larger than others”, but “note that I do not directly experience the measure of my existence. I feel the same weight, see the same brightness, etc. irrespectively of how tiny my measure of existence might be.”

So in order to think that minds matter in proportion to the measure of the world they’re in, while recognising they ‘feel’ precisely the same, it looks like you end up having to say that something beyond what a conscious experience is subjectively like makes an enormous difference to how much it matters morally. There’s no contradiction, but that seems strange to me — I would have thought that all there is to how much a conscious experience matters is just what it feels like, because that’s all I mean by ‘conscious experience’. After all, if I’m understanding this right, you’re in a ‘branch’ right now that is many orders of magnitude less real than the larger, ‘parent’ branch you were in yesterday. Does that mean your present welfare matters orders of magnitude less than it did yesterday?

Another approach might be to deny that arbitrary computations are conscious on independent grounds, and explain the observed Born probabilities without ‘diluting’ the weight of future experiences over time.
Also, presumably there’s some technical way of actually cashing out the idea of something being ‘less real’? Literally speaking, I’m guessing it’s best not to treat reality as a predicate at all (let alone one that comes in degrees). But that seems like a surmountable issue.
I’m afraid I’m confused by what you mean about including the Hilbert measure as part of the definition of MWI. My understanding was that MWI is something like what you get when you don’t add a collapse postulate, or any other definitional gubbins at all, to the bare formalism.
Why should sampling weight (you’re more likely to find yourself as a real vs Boltzmann brain, or ‘thick’ vs ‘arbitrary’ computation) imply ethical weight (the experiences of Boltzmann brains matter far less than real brains)?
I think the weights for prediction and moral value should be the same, or at least related. Consider: if we’re trying to act selfishly, we should make choices that lead to the best futures according to the sampling weight (conditioned on our experience so far), since the sampling weight is basically defined as our prior over future sense experiences. But then it seems strange to weigh other people’s experiences differently from our own.
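This isn’t from the thread, but here’s a toy sketch of the symmetry being pointed at, with made-up branch weights and values: the same sampling weights a selfish agent would use to average over its own futures get reused, unchanged, once other people’s experiences are added in.

```python
# Toy illustration (invented numbers): if the sampling weight is our prior over
# future experiences, a selfish agent already ranks actions by weight-averaged
# value. Swapping in different weights for other people would be an odd asymmetry.

# Hypothetical branches an action might lead to: (sampling_weight, my_value, others_value)
action_a = [(0.9, 10, 10), (0.1, 0, 0)]
action_b = [(0.5, 12, 12), (0.5, 1, 1)]

def selfish_score(branches):
    # Expected value of *my* experience, weighted by sampling weight.
    return sum(w * mine for w, mine, _ in branches)

def impartial_score(branches):
    # Same weights, now applied to everyone's experiences.
    return sum(w * (mine + others) for w, mine, others in branches)

for name, act in [("A", action_a), ("B", action_b)]:
    print(name, selfish_score(act), impartial_score(act))
# The point: once the sampling weight is doing the work in the selfish case,
# it is hard to motivate using different weights for other people's experiences.
```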
So in order to think that minds matter in proportion to the measure of the world they’re in, while recognizing they ‘feel’ precisely the same, it looks like you end up having to say that something beyond what a conscious experience is subjectively like makes an enormous difference to how much it matters morally
I think of the measure as being a generalization of what it means to ‘count’ experiences, not a property of the experiences themselves. So this is more like how, in utilitarianism, the value of an experience has to be multiplied by the number of people having it to get the total moral value. Here we’re just multiplying by the measure instead.
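To make the ‘counting’ analogy concrete, here is a small illustration in my own notation (the numbers are invented): ordinary utilitarian aggregation multiplies the value of an experience by a headcount, and the measure-based version simply swaps the integer count for a real-valued measure at that same aggregation step.

```python
# Toy sketch: with ordinary head-counting,
# total value = value_per_experience * number_of_people having it.
# Replacing the integer count with a real-valued measure is the only change.

experiences = [
    # (value of the experience, how many people have it)
    (5.0, 3),       # three people having a +5 experience
]
total_by_count = sum(v * n for v, n in experiences)   # 15.0

weighted_experiences = [
    # (value of the experience, measure of the world/branch it occurs in)
    (5.0, 0.7),     # the same experience, in a branch of measure 0.7
    (5.0, 0.3),     # and in a branch of measure 0.3
]
total_by_measure = sum(v * m for v, m in weighted_experiences)   # 5.0
# The experience "feels" the same in both branches; the measure only enters at
# the aggregation step, just as the headcount does in ordinary utilitarianism.
```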
My understanding was that MWI is something like what you get when you don’t add a collapse postulate, or any other definitional gubbins at all, to the bare formalism.
People like to claim that, but fundamentally you need to add some sort of axiom that describes how the wave function cashes out in terms of observations. The best you can get is an argument like “any other way of weighting the branches would be silly/mathematically inelegant”. Maybe, but you’re still gonna have to put it in if you want to actually predict anything. If you want to think of it in terms of writing a computer program, it simply won’t return predictions without adding the Born rule (what I’m calling the ‘Hilbert measure’ here).
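A minimal sketch of the ‘computer program’ point, using a single qubit as a stand-in (this is my illustration, not anything from the comment): unitary evolution on its own only hands you amplitudes, and some extra rule mapping amplitudes to observed frequencies has to be bolted on before the program can output a prediction; the Born rule is that extra ingredient.

```python
# Bare formalism vs. prediction: the unitary step below produces amplitudes,
# but nothing yet says what an observer would see. The |amplitude|^2 weighting
# (the Born rule) is the added rule that turns amplitudes into outcome frequencies.
import numpy as np

state = np.array([1.0, 0.0], dtype=complex)            # the state |0>
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # a unitary evolution
state = hadamard @ state                                 # bare formalism: just amplitudes

born_probs = np.abs(state) ** 2                          # the added postulate
outcome = np.random.choice([0, 1], p=born_probs)         # now the program can "observe"
print(state, born_probs, outcome)
# Any other weighting (say, proportional to |amplitude| or to branch count)
# would also run, which is why the choice has to be put in by hand.
```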
Still don’t know what to think about all this!
I think the actual reason is more like: there is nothing you can do to improve the average experience of Boltzmann brains.
Got it, thanks very much for explaining.