You’re right that you can just take whatever approximation you make at the macroscopic level (‘sunny’) and convert that into a metric for counting worlds. But the point is that everyone will acknowledge that the counting part is arbitrary from the perspective of fundamental physics—but you can remove the arbitrariness that derives from fine-graining, by focusing on the weight. (That is kind of the whole point of a mathematical measure.)
But why would you want to remove this arbitrariness? Your preferences are fine-grained anyway, so why retain classical counting but deny counting in the space of the wavefunction? It’s like saying “dividing the world into people and their welfare is arbitrary—let’s focus on measuring the mass of a region of space”. The point is that you can’t remove all decision-theoretic arbitrariness from MWI—“branching indifference” is just an arbitrary ethical constraint that is equivalent to valuing measure for no reason, and without it, fundamental physics that works like MWI does not prevent you from making decisions as if quantum immortality works.
I don’t get why you would say that the preferences are fine-grained, it kinda seems obvious to me that they are not fine-grained. You don’t care about whether worlds that are macroscopically indistinguishable are distinguishable at the quantum level, because you are yourself macroscopic. That’s why branching indifference is not arbitrary. Quantum immortality is a whole other controversial story.
Because scale doesn’t matter—it doesn’t matter whether you are implemented on a thick or a narrow computer.
First of all, macroscopic indistinguishability is not a fundamental physical property—branching indifference is an additional assumption, so I don’t see how it is any less arbitrary than branch counting.
But more importantly, the branching indifference assumption is not the same as the informal “not caring about macroscopically indistinguishable differences”! As Wallace showed, branching indifference implies the Born rule, which in turn implies that you should care almost nothing about a version of yourself in a branch with a measure of 0.000001, even though that branch may involve a drastic macroscopic difference for you. Being macroscopic yourself doesn’t imply you shouldn’t care about your low-measure instances.
First of all, macroscopic indistinguishability is not a fundamental physical property—branching indifference is an additional assumption, so I don’t see how it is any less arbitrary than branch counting.
You’re right it’s not a fundamental physical property—the overall philosophical framework here is that things can be real—as emergent entities—without being fundamental physical properties. Things like lions, and chairs are other examples.
But more importantly, the branching indifference assumption is not the same as the informal “not caring about macroscopically indistinguishable differences”!
This is how Wallace defines it (he in turn defines macroscopically indistinguishable in terms of providing the same rewards). It’s his term in the axiomatic system he uses to get decision theory to work. There’s not much to argue about here?
As Wallace showed, branching indifference implies the Born rule, which in turn implies that you should care almost nothing about a version of yourself in a branch with a measure of 0.000001, even though that branch may involve a drastic macroscopic difference for you. Being macroscopic yourself doesn’t imply you shouldn’t care about your low-measure instances.
Yes this is true. Not caring about low-measure instances is a very different proposition from not caring about macroscopically indistinguishable differences. We should care about low-measure instances in proportion to the measure, just as in classical decision theory we care about low-probability instances in proportion to the probability.
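As a toy illustration (the weights and utilities here are invented), measure plays the same formal role in the calculation that probability plays in classical expected utility:

```python
# Toy illustration: Born weights enter a decision calculation exactly
# the way classical probabilities do. All numbers are made up.

branches = [
    {"weight": 0.999999, "utility": 10.0},    # high-measure branch
    {"weight": 0.000001, "utility": -1000.0}, # low-measure branch
]

# Measure-weighted value, analogous to classical expected utility:
# the low-measure branch contributes in proportion to its measure.
expected_utility = sum(b["weight"] * b["utility"] for b in branches)
print(round(expected_utility, 5))
```

On these numbers the drastic outcome in the low-measure branch shifts the total by only 0.001, which is the point of contention in this thread.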
This is how Wallace defines it (he in turn defines macroscopically indistinguishable in terms of providing the same rewards). It’s his term in the axiomatic system he uses to get decision theory to work. There’s not much to argue about here?
His definition leads to a contradiction with the informal intuition that motivates considering macroscopic indistinguishability in the first place.
We should care about low-measure instances in proportion to the measure, just as in classical decision theory we care about low-probability instances in proportion to the probability.
Why? Wallace’s argument is just “you don’t care about some irrelevant microscopic differences, so let me write down an assumption that is superficially related to that preference, and look—it implies the Born rule”. Given MWI, there is nothing physically or rationally wrong with valuing your instances equally whatever their measure is. Their thoughts and experiences don’t depend on measure, the same way they don’t depend on the thickness or mass of a computer implementing them. You can rationally not care about irrelevant microscopic differences and still care about the number of your thin instances.
I’m not at all saying the experiences of a person in a low-weight world are less valuable than a person in a high-weight world. Just that when you are considering possible futures in a decision-theoretic framework you need to apply the weights (because weight is equivalent to probability).
Wallace’s useful achievement in this context is to show that there exists a set of axioms that makes this work, and these include branching indifference.
This is useful because it makes clear the way in which the branch-counting approach you’re suggesting conflicts with decision theory. So I don’t disagree that you can care about the number of your thin instances, but what I’m saying is that in that case you need to accept that this makes decision theory and probably consequentialist ethics impossible in your framework.
It doesn’t matter whether you call your multiplier “probability” or “value” if it results in your deciding not to care about a low-measure branch. The only difference is that probability is supposed to be about knowledge, and the fact that Wallace’s argument involves an arbitrary assumption, not just physics, means it’s not probability but value—and there is no reason to value knowledge of your low-measure instances less.
this makes decision theory and probably consequentialist ethics impossible in your framework
It doesn’t? Nothing stops you from making decisions in a world where you are constantly splitting. You can try to maximize splits of good experiences, or something. It just wouldn’t be the same decisions you would make without knowledge of splits—but why shouldn’t new physical knowledge change your decisions?
OK ‘impossible’ is too strong, I should have said ‘extremely difficult’. That was my point in footnote 3 of the post. Most people would take the fact that it has implications like needing to “maximize splits of good experiences” (I assume you mean maximise the number of splits) as a reductio ad absurdum, due to the fact that this is massively different from our normal intuitions about what we should do. But some people have tried to take that approach, like in the article I mentioned in the footnote. If you or someone else can come up with a consistent and convincing decision approach that involves branch counting I would genuinely love to see it!
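To make the conflict concrete, here is a toy sketch (all weights and counts are invented) of how a Born-weighting rule and a naive branch-counting rule can rank the same two actions oppositely:

```python
# Toy model: an action produces a set of branches, each with a Born
# weight and a flag for whether its outcome is "good". All numbers
# are invented for illustration.

# Action A: one high-weight good branch.
action_a = [{"weight": 0.9, "good": True}, {"weight": 0.1, "good": False}]
# Action B: many low-weight good branches whose total weight is small.
action_b = [{"weight": 0.001, "good": True}] * 100 + [{"weight": 0.9, "good": False}]

def born_weighted_value(branches):
    """Value = total Born weight on good branches (measure-weighted rule)."""
    return sum(b["weight"] for b in branches if b["good"])

def branch_count_value(branches):
    """Value = number of good branches, ignoring their weights."""
    return sum(1 for b in branches if b["good"])

# The measure-weighted rule prefers A; the branch-counting rule prefers B.
print(round(born_weighted_value(action_a), 3), round(born_weighted_value(action_b), 3))
print(branch_count_value(action_a), branch_count_value(action_b))
```

The two rules disagree whenever an action multiplies the number of good branches while shrinking their total weight, which is exactly the kind of case the thread is arguing about.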
And one of Wallace’s axioms, which he calls ‘branching indifference’, essentially says that it doesn’t matter how many branches there are, since macroscopic differences are all that we care about for decisions.
The macroscopically different branches and their weights?
Focussing on the weight isn’t obviously correct, ethically. You can’t assume that the answer to “what do I expect to see?” will work the same as the answer to “what should I do?”. Is-ought gap and all that.
It’s tempting to think that you can apply a standard decision theory in terms of expected value to Many Worlds, since it is just a matter of multiplying subjective value by probability. It seems reasonable to assess the moral weight of someone else’s experiences and existence from their point of view. (Edit: also, our experiences seem fully real to us, although we are unlikely to be in a high-measure world.) That is the intuition behind the common rationalist/utilitarian/EA view that human lives don’t decline in moral worth with distance. So why should they decline with lower quantum mechanical measure?
There is a quandary here: sticking to the usual “adds up to normality” principle as an a priori axiom means discounting the ethical importance of low-measure worlds in order to keep your favourite decision theory operating in the usual single-universe way... even if you are in a multiverse. But sticking to the equally usual universalist axiom, that you don’t get to discount someone’s moral worth on the basis of factors that aren’t intrinsic to them, means you should not discount... and that the usual decision theory does not apply.
Basically, there is a tension between four things Rationalists are inclined to believe in:
Some kind of MWI is true.
Some kind of utilitarian and universalist ethics is true.
Subjective things like suffering are ethically relevant. It’s not all about the number of kittens.
It’s all business as normal... it all adds up to normality... fundamental ontological differences should not affect your decision theory.
That is the intuition behind the common rationalist/utilitarian/EA view that human lives don’t decline in moral worth with distance. So why should they decline with lower quantum mechanical measure?
For the same reason that they decline with classical measure: two people are worth more than one. And with classical probability measure: a 100% chance of someone surviving something is better than a 50% chance.
They are not the same things though. Quantum mechanical measure isn’t actually a head count, like classical measure. The theory doesn’t say that—it’s an extraneous assumption. It might be convenient if it worked that way, but that would be assuming your conclusion.
QM measure isn’t probability—the probability of something occurring or not—because all possible branches occur in MWI.
Another part of the problem stems from the fact that what other people experience is relevant to them, whereas for a probability calculation, I only need to be able to statistically predict my own observations. Using QM to predict my own observations, I can ignore the question of whether something has a ten percent chance of happening in the one and only world, or a certainty of happening in one tenth of possible worlds. However, these are not necessarily equivalent ethically.
They are not the same things though. Quantum mechanical measure isn’t actually a head count, like classical measure. The theory doesn’t say that—it’s an extraneous assumption. It might be convenient if it worked that way, but that would be assuming your conclusion.
QM measure isn’t probability—the probability of something occurring or not—because all possible branches occur in MWI.
So whence the Born probabilities that underlie the predictions of QM? I am not well versed in QM, but what is meant by quantum mechanical measure, if not those probabilities?
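For what it’s worth, the two are directly connected: in MWI, the measure of a branch is the squared magnitude of its amplitude, which is exactly the number the Born rule treats as a probability. A minimal sketch, with invented amplitudes:

```python
# Born weights from branch amplitudes: weight_i = |c_i|**2,
# where the state is a superposition sum_i c_i |i>.
# The amplitudes here are invented; any normalized state would do.
amplitudes = [(1 + 1j) / 2, (1 - 1j) / 2]  # each has |c|^2 = 0.5

weights = [abs(c) ** 2 for c in amplitudes]

# Normalization: the weights of all branches sum to 1,
# which is what lets them function formally as probabilities.
assert abs(sum(weights) - 1.0) < 1e-12
print(weights)
```

The disagreement in this thread is not about that arithmetic, but about whether those numbers should also set the *ethical* weight of each branch.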
We should care about low-measure instances in proportion to the measure, just as in classical decision theory we care about low-probability instances in proportion to the probability.

And counted branches.