Why should sampling weight (you’re more likely to find yourself as a real brain vs. a Boltzmann brain, or a ‘thick’ vs. an ‘arbitrary’ computation) imply ethical weight (the experiences of Boltzmann brains matter far less than those of real brains)?
I think the weights for prediction and moral value should be the same, or at least related. Consider: if we’re trying to act selfishly, then we should make choices that lead to the best futures according to the sampling weight (conditioned on our experience so far), since the sampling weight is basically defined as our prior over future sense experiences. But then it seems strange to weigh other people’s experiences differently than our own.
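To make that concrete, here’s a rough sketch of the selfish decision rule I have in mind (the notation is mine, purely illustrative), with $\mu$ standing for the sampling measure:

$$a^* = \arg\max_a \sum_e \mu(e \mid a, \text{experience so far}) \, U(e)$$

where the sum runs over possible future experience-streams $e$. If the same $\mu$ does both the predicting and the valuing, there’s no extra knob left with which to weight someone else’s experiences differently from your own.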
So, in order to think that minds matter in proportion to the measure of the world they’re in, while recognizing that they ‘feel’ precisely the same, it looks like you end up having to say that something beyond what a conscious experience is subjectively like makes an enormous difference to how much it matters morally.
I think of the measure as being a generalization of what it means to ‘count’ experiences, not a property of the experiences themselves. So this is more like how, in utilitarianism, the value of an experience has to be multiplied by the number of people having it to get the total moral value. Here we’re just multiplying by the measure instead.
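In symbols (again my notation, just to illustrate the analogy): a utilitarian total is something like $\sum_i n_i \, v(e_i)$, where $n_i$ is the number of people having experience $e_i$ and $v$ is how good or bad that experience is from the inside. The proposal here just swaps the integer count for the measure, $\sum_i \mu_i \, v(e_i)$; $v$ is untouched, and only the ‘how much of it there is’ factor changes.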
My understanding was that MWI is something like what you get when you don’t add a collapse postulate, or any other definitional gubbins at all, to the bare formalism.
People like to claim that, but fundamentally you need to add some sort of axiom that describes how the wave function cashes out in terms of observations. The best you can get is an argument like “any other way of weighting the branches would be silly/mathematically inelegant”. Maybe, but you’re still gonna have to put it in if you want to actually predict anything. If you want to think of it in terms of writing a computer program, it simply won’t return predictions without adding the Born rule (what I’m calling the ‘Hilbert measure’ here).
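To illustrate what I mean by the program not returning predictions, here’s a minimal toy sketch (my own, not anyone’s actual simulator): unitary evolution by itself just pushes a vector of amplitudes around; the only place probabilities of observations enter is the line implementing the Born rule.

```python
import numpy as np

# Toy illustration: the "bare formalism" is just linear algebra on amplitudes.
# A qubit starting in |0>, evolved by a Hadamard unitary.
state = np.array([1.0, 0.0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ state  # unitary evolution: still no observations, just amplitudes

# Without the next two lines the program never outputs a prediction --
# it only ever holds the full superposition.
born_probs = np.abs(state) ** 2                      # the added axiom: p = |amplitude|^2
outcome = np.random.choice([0, 1], p=born_probs)     # sample a predicted observation

print("amplitudes:", state)
print("predicted outcome this run:", outcome)
```

Any other mapping from amplitudes to outcome frequencies (uniform over branches, proportional to |amplitude|, etc.) would slot into exactly that line, which is the sense in which it’s an added postulate rather than something you read off the bare wave function.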
Got it, thanks very much for explaining.