“really exists” is insufficient to affect decisions, though. Two more assertions must be made:
3. That world is a valid moral target, which I care about. (Debatable. Stipulated for this purpose.)
4. That world is sufficiently different from my causally-reachable light-cone that I should behave differently than I would if I only cared about “my” world.
#4 is the tricky one IMO. Maybe it is sufficiently different, but how could I know how it differs or what decisions I should make differently?
Many worlds strongly implies that you make all possible decisions, so it undermines the very basis of decision theory—you have no freedom to perform one act while refraining from another.
Many worlds strongly implies that you make all possible decisions
It would imply that only if every decision I make is the result of a “quantum measurement”, which is not the case.
In fact, any organism that can reason (maintain a model of its environment or express any preference for one outcome over another) cannot make all its decisions that way, because such a decision cannot increase the mutual information between the environment and the organism’s preferences; the necessity of that mutual information is explained in Eliezer’s 2007 blog post “What is evidence?”
In other words, a quantum measurement splits the universe into 2 branches, but in both branches, the organism has no information about its environment that it did not have before the split, so quantum measurements cannot be the only kind of decisions made by the organism.
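To make the information-theoretic point concrete, here is a minimal sketch (purely illustrative; the two-bit toy world, the variables E and Q, and the mutual_information helper are not from the thread). It computes the mutual information between an organism’s internal record and an unknown environment bit, once when the record is just the branch label of an independent quantum coin, and once when the record is an actual observation of the environment:

```python
import itertools
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from an iterable of (x, y) samples treated as equally likely."""
    joint = Counter(pairs)
    total = sum(joint.values())
    px = Counter()
    py = Counter()
    for (x, y), n in joint.items():
        px[x] += n
        py[y] += n
    return sum(
        (n / total) * log2((n / total) / ((px[x] / total) * (py[y] / total)))
        for (x, y), n in joint.items()
    )

# Case 1: the organism's record is just the branch label Q of an independent quantum
# coin; Q carries no information about the environment bit E.
pairs_quantum = [(e, q) for e, q in itertools.product([0, 1], [0, 1])]
print(mutual_information(pairs_quantum))      # 0.0 bits

# Case 2: the organism actually observes E and records it.
pairs_observation = [(e, e) for e in [0, 1]]
print(mutual_information(pairs_observation))  # 1.0 bit
```

The first figure is the sense in which a pure branch-split adds nothing to what the organism knows about its surroundings.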
Let us get concrete. Many-worlds physics tells us that the universe will split many times while a computation is running on my computer. Suppose I use my computer to factor a large integer. It is not a consequence of many-worlds physics that there is another branch of the universe in which the computation resulted in a different answer, i.e., different prime factors. Moreover, if there is another branch that gets a different answer, then any competent computer scientist who has taken the time to examine my computer and the factorization program in sufficient detail (even one who does not believe in many-worlds physics) will conclude that there is a bug in one or the other: a sufficiently well-designed computer and factorization program will get the same answer in every one of the thousands of branches that exist at the end of the computation. Designing computers and programs that well is not beyond the capabilities of people living right now, at least for simple programming tasks like factoring integers.
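A toy version of the factoring claim (illustrative only; the factorize routine and the example integer are not from the comment above): a correct deterministic program maps a given input to exactly one output, so every branch that runs it faithfully reports the same factors.

```python
def factorize(n: int) -> list[int]:
    """Return the prime factors of n in ascending order by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

# However many times it runs (or in however many branches), a correct deterministic
# program produces exactly one answer for a given input.
n = 7907 * 7919                      # product of two primes, chosen for illustration
results = {tuple(factorize(n)) for _ in range(100)}
print(results)                       # {(7907, 7919)}: a single outcome
```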
OK, a programme is de facto deterministic: it must do whatever it can do, even though that is only one thing.
But what’s the reason to think the brain is deterministic?
In fact, any organism that can reason (maintain a model of its environment or express any preference for one outcome over another) cannot make all its decisions that way, because such a decision cannot increase the mutual information between the environment and the organism’s preferences; the necessity of that mutual information is explained in Eliezer’s 2007 blog post “What is evidence?”
Any decision made any way will increase the mutual information between the organism and its environment because the consequences of the decision will become part of the environment.
In any case, EY wasn’t describing how decision making must occur, or how belief formation must occur, but how belief formation should occur.
Any decision made any way will increase the mutual information between the organism and its environment because the consequences of the decision will become part of the environment.
If I make a “quantum measurement” (the kind of thing that routinely causes the world to split, e.g., measuring the spin of an electron) then I receive one bit of information, namely, which branch I ended up in. Information is conserved; I only received one bit of information; one entire bit is required to indicate which branch I ended up in; consequently, I must have learned nothing about the original unsplit world. Do you see now?
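A compact restatement of that argument (notation introduced here for illustration, not taken from the thread): write M for the measurement record and E for the state of the pre-split environment, with which the electron’s spin is assumed to be uncorrelated. Then

$$I(M; E) = H(M) - H(M \mid E) = 1\ \text{bit} - 1\ \text{bit} = 0,$$

i.e. the record tells you which branch you are in and nothing else.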
EY wasn’t describing how decision making must occur, or how belief formation must occur, but how belief formation should occur.
There’s a joke patterned after an old TV commercial (about seatbelt wearing) from the 1960s or 1970s: “Gravity: it’s not just a good idea; it’s the law.” The same thing is true of Eliezer’s post “What is evidence?”: namely, for any organism or system, using beliefs to maintain homeostasis, to survive, or to steer reality at all requires mutual information between the organism and its environment. It is not just a good idea.
I see that you didn’t learn anything about the pre-split world, but I don’t see why that matters. Again, the epistemic rule that you can only learn from evidence that is causally related to the thing it is evidence of... doesn’t obviously apply to action. Learning has a world->mind arrow; action has a mind->world arrow.
The same thing is true of Eliezer’s post “What is evidence?”: namely, for any organism or system, using beliefs to maintain homeostasis, to survive, or to steer reality at all requires mutual information between the organism and its environment.
Again, action does not destroy mutual information. If an indeterministic event occurs, you gain at least subjective information about it. If the event happens to be your own undetermined decision... the same applies. If your decision was determined, no objective information is gained or lost. If your decision was predictable to yourself, no subjective information is gained or lost.
Saying “in many worlds you make all possible decisions” is technically true, but it is important to add that some actions happen with large probabilities, and some of them happen with microscopic probabilities. That doesn’t undermine decision theory; you can still use it to do the right thing with 99.9999% probability.
(Unless you believe in a version of “many worlds” that says that everything happens with the same measure, which is not what physicists believe. If that were true, then quantum computers would be useless for any purpose other than generating perfectly random numbers.)
(And if your objection is that you can’t make “decisions” when the atoms in your brain are following the laws of physics, then 1. this is unrelated to quantum or classical physics, and 2. yes you can, this is exactly how evolution designed us by selecting for configurations of atoms that were more likely than random to do the right thing.)
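As a toy sketch of how decision theory survives branching (illustrative only; the expected_utility helper and the branch weights are assumptions, with 0.999999 echoing the 99.9999% figure above): weight each branch by its measure and compare actions by their measure-weighted utility.

```python
def expected_utility(branches):
    """branches: iterable of (measure, utility) pairs whose measures sum to 1."""
    return sum(m * u for m, u in branches)

# A well-chosen action succeeds on branches of total measure 0.999999; a poorly
# chosen one is a coin flip. Both outcomes "happen" in some branch, but their
# measures, and therefore their expected utilities, differ.
careful_action = [(0.999999, 1.0), (0.000001, 0.0)]
reckless_action = [(0.5, 1.0), (0.5, 0.0)]

print(expected_utility(careful_action))   # 0.999999
print(expected_utility(reckless_action))  # 0.5
```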
That doesn’t undermine decision theory; you can still use it to do the right thing with 99.9999% probability.
If it’s possible to use decision theory in a deterministic universe, then MWI doesn’t make things worse except by removing refraining. However, the role of decision theory in a deterministic universe is pretty unclear, since you can’t freely decide to use it to make a better decision than the one you would have made anyway.
The probability density of an agent (e.g. a person) operating under a given decision theory (realistically, with a given set of internal causal gears) making a certain decision varies from potential decision to potential decision. So, although every possible decision has a non-zero probability density of happening, there are meaningful differences in frequency between various actions.
But MW is deterministic. Under single-universe determinism, you can’t actually choose to perform an action you would not have performed. (Decision theory is either a step in the process or irrelevant.) What MWI adds to that inability is a further inability to refrain. If MWI allowed freely willed agents to change the measures of their actions, then they could lower the measures of the less favoured ones... but it doesn’t.
That’s exactly what decoherent many worlds asserts!
Deterministic physics excludes free choice. Physics doesn’t.
MWI is deterministic over the multiverse, not per-universe.
Yes, and it still precludes free choice, like single-universe determinism, as well as precluding refraining.
I feel this misses the mark.
Probability is still a well-defined concept, even in a deterministic many-worlds model.
And you still can’t refrain.