And one of Wallace’s axioms, which he calls ‘branching indifference’, essentially says that it doesn’t matter how many branches there are, since macroscopic differences are all that we care about when making decisions.
The macroscopically different branches and their weights?
Focussing on the weight isn’t obviously correct, ethically. You can’t assume that the answer to “what do I expect to see?” will work the same as the answer to “what should I do?”. Is-ought gap and all that.
It’s tempting to think that you can apply a standard decision theory in terms of expected value to Many Worlds, since that is just a matter of multiplying subjective value by probability. But it seems reasonable to assess the moral weight of someone else’s experiences and existence from their point of view. (Edit: also, our experiences seem fully real to us, although we are unlikely to be in a high-measure world.) That is the intuition behind the common rationalist/utilitarian/EA view that human lives don’t decline in moral worth with distance. So why should they decline with lower quantum mechanical measure?
There is a quandary here: sticking to the usual “adds up to normality” principle as an a priori axiom means discounting the ethical importance of low-measure worlds in order to keep your favourite decision theory operating in the usual single-universe way... even if you are in a multiverse. But sticking to the equally usual universalist axiom, that you don’t get to discount someone’s moral worth on the basis of factors that aren’t intrinsic to them, means you should not discount... and that the usual decision theory does not apply.
Basically, there is a tension between four things Rationalists are inclined to believe in:-
Some kind of MWI is true.
Some kind of utilitarian and universalist ethics is true.
Subjective things like suffering are ethically relevant. It’s not all about the number of kittens.
It’s all business as normal... it all adds up to normality... fundamental ontological differences should not affect your decision theory.
Why should lives decline in moral worth with lower quantum mechanical measure? For the same reason that they decline with classical measure: two people are worth more than one. And for the same reason that they decline with classical probability measure: a 100% chance of someone surviving something is better than a 50% chance.
They are not the same things though. Quantum mechanical measure isn’t actually a head count, like classical measure. The theory doesn’t say that—it’s an extraneous assumption. It might be convenient if it worked that way, but that would be assuming your conclusion.
QM measure isn’t probability—the probability of something occurring or not—because all possible branches occur in MWI.
Another part of the problem stems from the fact that what other people experience is relevant to them, whereas for a probability calculation, I only need to be able to statistically predict my own observations. Using QM to predict my own observations, I can ignore the question of whether something has a ten percent chance of happening in the one and only world, or a certainty of happening in one tenth of possible worlds. However, these are not necessarily equivalent ethically.
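The predictive equivalence, and the ethical gap, can be sketched with toy numbers. (The payoff values below are illustrative assumptions, not anything from the discussion.)

```python
# Toy payoffs (assumed for illustration): a harm worth -100, otherwise 0.
HARM, NO_HARM = -100.0, 0.0

# Single world: a 10% classical chance of the harm occurring.
single_world_expectation = 0.10 * HARM + 0.90 * NO_HARM

# Many worlds: the harm occurs with certainty in branches of total measure 0.1.
branches = [(0.10, HARM), (0.90, NO_HARM)]  # (measure, outcome)
many_worlds_expectation = sum(m * v for m, v in branches)

# For predicting my own observations, the two cases are indistinguishable.
assert single_world_expectation == many_worlds_expectation  # both come to about -10

# But an ethics that refuses to discount by measure aggregates differently:
# every branch's experience counts fully, so the harm is not diluted at all.
undiscounted_total = sum(v for _, v in branches)
print(undiscounted_total)  # -100.0
```

The point of the sketch is only that the first two quantities are computed identically, while the third, undiscounted aggregation comes apart from both.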
So whence the Born probabilities that underlie the predictions of QM? I am not well versed in QM, but what is meant by quantum mechanical measure, if not those probabilities?
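For what it’s worth, the Born weights come from the branch amplitudes: the measure of a branch is the squared modulus of its amplitude. A minimal sketch, with made-up example amplitudes:

```python
import math

# Example state (assumed): an equal superposition of two branches.
amplitudes = [complex(1 / math.sqrt(2), 0), complex(0, 1 / math.sqrt(2))]

# Born rule: each branch's weight (measure) is |amplitude|^2.
weights = [abs(a) ** 2 for a in amplitudes]

# For a normalised state the weights sum to 1, which is what lets them be
# read as probabilities in a single-world setting, even though in MWI
# every branch actually occurs.
assert math.isclose(sum(weights), 1.0)
print(weights)
```

Whether those weights also function as a head count, or as a moral discount factor, is exactly the question at issue; the formalism itself only says they sum to one.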