But I hope the arguments I’ve laid out above make it clear what the right answer has to be: You should anticipate having both experiences.
Some quantum experiments allow us to mostly anticipate some outcomes and not others. Either quantum physics doesn’t work the way Eliezer thinks it works and the universe is small enough that it doesn’t contain many spontaneously appearing copies of your brain, or we should be pretty surprised to keep finding ourselves in such an ordered universe, where we don’t start seeing white noise over and over again.
I agree that if there are two copies of the brain that perfectly simulate it, both exist; but it’s not clear to me what I should anticipate in terms of ending up somewhere. Future versions of me that have fewer copies would feel that they exist just as much as versions that have many copies, or that run on computers with thicker wires or more current.
But finding myself in an orderly universe, where quantum random number generators produce the expected frequencies of results, requires something more than the simple truth that if an abstract computation is being computed, well, it is computed, and if it is experiencing, it is experiencing (independently of how many computers, in which proportions, using which physics-simulating frameworks, physically run it).
I’m pretty confused about what it would take to produce a satisfying answer, conditional on a large enough universe. The only potential explanation I came up with after thinking for ~15 minutes (before reading this post) was pretty circular and not satisfying (I can’t see a valid-feeling way to consider something in my brain entangled with how true this answer is without already relying on it).
(“What’s up with all the Boltzmann brain versions of me? Do they start seeing white noise, starting from every single moment? Why am I experiencing this instead?”)
And in a large enough universe, deciding to run on silicon instead of proteins might be pretty bad, because, if the GPUs that run the brain are tiny enough, most future versions of you might end up in weird forms of quantum immortality instead of being simulated.
If I physically scale up my brain on some outcomes of quantum dice throws but not others, do I start observing skewed frequencies of results?
The solution is here. In a nutshell: naive MWI is wrong; not all Everett branches coexist, but a lot of Everett branches do coexist, such that with high probability all of them display the expected frequencies.
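As a rough illustration of the “with high probability all of them display the expected frequencies” claim (this is just the standard Hoeffding concentration bound applied to the Born measure; the linked solution’s actual argument may differ):

```latex
% For n independent measurements of a qubit whose Born probability of
% outcome 1 is p, the total Born measure of the outcome sequences whose
% empirical frequency deviates from p by more than \varepsilon is
% exponentially small (Hoeffding's inequality):
\[
  \mu_{\mathrm{Born}}\!\left(\left\{\, x \in \{0,1\}^{n} :
    \left|\tfrac{1}{n}\textstyle\sum_{i=1}^{n} x_i - p\right| \ge \varepsilon
  \right\}\right) \;\le\; 2\, e^{-2 n \varepsilon^{2}}.
\]
% So a theory that keeps only the branches outside this exceptional set drops
% a set of total Born measure at most 2 e^{-2 n \varepsilon^2}, and every
% branch that remains displays a frequency within \varepsilon of p.
```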
I can imagine this being the solution, but:
1. this would require a pretty small universe
2. if this is not the solution, my understanding is that IBP agents wouldn’t know or care: regardless of how likely it is that we live in naive MWI or Tegmark IV, they focus on the minimal worlds required. Sure, in these worlds not all Everett branches coexist, and it is coherent for an agent to focus only on these worlds; but that doesn’t tell us much about how likely it is that we’re in a small world. (I.e., if we thought atoms were ontologically basic, we could build a coherent ASI that only cared about worlds with ontologically basic atoms and only cared about things made of ontologically basic atoms. After observing the world, it would assume it’s running in a simulation of a quantum world on a computer built of ontologically basic atoms, and it would try to influence the atoms outside the simulation and wouldn’t care about our universe. That some coherent ASIs can think atoms are ontologically basic shouldn’t tell us anything about whether atoms are indeed ontologically basic.)
Conditional on a small universe, I would prefer the IBP explanation (or other accounts on which not all of the branches run and the Born rule is still produced). Without it, there’s clearly some sort of sampling going on.
Not sure what you mean by “this would require a pretty small universe”.
If we live in naive MWI, an IBP agent would not care, for good reasons: naive MWI is a “Library of Babel” where essentially every conceivable thing happens no matter what you do.
Also not sure what you mean by “some sort of sampling”. AFAICT, quantum IBP is the closest thing to a coherent answer that we have, by a significant margin.
Doesn’t the frequency of amplitude-patterns change depending on what you do? So an agent can care about that instead of point-states.
I mean, if the universe is big enough for every conceivable thing to happen, then we should notice that we find ourselves in a surprisingly structured environment, and we need to assume some sort of effect where, when a cognitive architecture opens its eyes, it opens them in different places with likelihood corresponding to how common those places are (e.g., among all Turing machines).
E.g., if your brain is uploaded, and you see a door in front of you, and when you open it, 10 identical computers each start running a copy of you, 9 showing a green room and 1 showing a red room, then you expect that when you enter a room and open your eyes, in 9/10 of cases you’ll find yourself in a green room.
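A minimal sketch of that counting intuition (my own toy model, not anything proposed in this thread; the 9/1 split and uniform weighting over running copies are the assumptions being illustrated):

```python
import random

# Toy model of the door example: after the door opens, 10 identical computers
# each run a copy of you; 9 display a green room and 1 displays a red room.
# If anticipation weights every running copy equally, the anticipated
# frequency of "green" is 9/10.
COPIES = ["green"] * 9 + ["red"] * 1

def sample_experience(rng: random.Random) -> str:
    """Pick one running copy uniformly at random."""
    return rng.choice(COPIES)

def estimate_green_fraction(trials: int = 100_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    greens = sum(sample_experience(rng) == "green" for _ in range(trials))
    return greens / trials

if __name__ == "__main__":
    # Prints a value close to 0.9.
    print(f"anticipated P(green room) ~= {estimate_green_fraction():.3f}")
```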
So if this is the situation we’re in, where everything happens, then I think a more natural way to rescue our values would be to care about what cognitive algorithms usually experience when they open their eyes or other senses. Do they suffer, or do they find all sorts of meaningful beauty in their experiences? I don’t think we should stop caring about suffering just because it happens anyway, if we can still have an impact on how common it is.
If we live in naive MWI, an IBP agent doesn’t care, for good reasons internal to it (somewhat similar to how, if we’re in our world, an agent that cares only about ontologically basic atoms doesn’t care about our world, for good reasons internal to it); but I think that, conditional on naive MWI, humanity’s CEV is different from what IBP agents can natively care about.
Your reasoning is invalid, because in order to talk about updating your beliefs in this context, you need a metaphysical framework which knows how to deal with anthropic probabilities (e.g. it should be able to answer puzzles in the vein of the anthropic trilemma according to some coherent, well-defined mathematical rules). IBP is such a framework, but you haven’t proposed any alternative, not to mention an argument for why that alternative is superior.
I always thought that in naive MWI what matters is not whether something happens in an absolute sense, but how much Born measure is concentrated on branches that contain good things instead of bad things.
The problem is that this requires introducing a special decision-theory postulate that you’re supposed to care about the Born measure for some reason, even though the Born measure doesn’t correspond to ordinary probability.
Huh? The whole point of the Born rule is to get a set of ordinary probabilities, which you can then test frequentistically over a run of experiments. Quantum mechanical measure—amplitude—isn’t ordinary probability, but it’s the thing you put into the Born rule, not the thing you get out of it. And it has its own role, which is explaining how much each component state contributes to a coherent superposition.
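For concreteness, the textbook statement being referenced (nothing here is specific to this thread):

```latex
% Born rule: amplitudes go in, ordinary probabilities come out.
% For a qubit prepared in the coherent superposition
%   |\psi\rangle = \alpha|0\rangle + \beta|1\rangle,
% with |\alpha|^2 + |\beta|^2 = 1, a measurement in the {|0>, |1>} basis gives
\[
  \Pr(0) = \left|\langle 0|\psi\rangle\right|^{2} = |\alpha|^{2},
  \qquad
  \Pr(1) = \left|\langle 1|\psi\rangle\right|^{2} = |\beta|^{2}.
\]
% These are ordinary probabilities, testable as frequencies over repeated
% preparations; \alpha and \beta themselves are the (non-probability)
% amplitudes that fix each component's contribution to the superposition.
```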
ETA
There is a further problem interpreting the probabilities of fully decohered branches. (Calling them Everett branches is very misleading—a clear theory of decoherence is precisely what’s lacking in Everett’s work.)
Whether you are supposed to care about them ethically is very unclear, since it is not clear how utilitarian-style ethics would apply, even if you could make sense of the probabilities. But you are not supposed to care about them for the purposes of doing science, since they can no longer make any difference to your branch. MWI works like a collapse theory in practice.
It’s tempting to ethically discount low-measure decoherent branches in some way, because that most closely approximates conventional single-world utilitarianism—that is something “naive MWI” might mean. However, one should not jump to the conclusion that something is true just because it is convenient. And of course, MWI is a scientific theory, so it doesn’t come with built-in ethics.
The alternative view starts with the question of whether a person in a low-measure world still counts as a full person. If they should not, is that because they are a near-zombie, with a faint consciousness that weighs little in a hedonic utilitarian calculus? If they are not such zombies, why would they not count as a full person—the standard utilitarian argument that people in far-off lands are still moral patients seems to apply. Of course, MWI doesn’t directly answer the question about consciousness.
If “naive MWI” means the idea that any elementary interaction produces decoherent branching, then it is wrong for the reasons I explain here. Since there are some coherent superpositions, and not just decoherent branches, there are cases where the Born rule gives you ordinary probabilities, as any undergraduate physics student knows.
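A hedged illustration of the coherent/decoherent distinction being invoked (the standard textbook calculation, not a claim about what any “naive MWI” proponent believes):

```latex
% Coherent superposition vs. decohered branches for
%   |\psi\rangle = (|A\rangle + |B\rangle)/\sqrt{2}.
% Coherent case: the Born rule is applied to the summed amplitude, so the
% interference (cross) term survives:
\[
  \Pr_{\mathrm{coh}}(x)
  = \tfrac{1}{2}\bigl|\langle x|A\rangle + \langle x|B\rangle\bigr|^{2}
  = \tfrac{1}{2}\Bigl(|\langle x|A\rangle|^{2} + |\langle x|B\rangle|^{2}\Bigr)
    + \operatorname{Re}\bigl(\langle x|A\rangle\,\overline{\langle x|B\rangle}\bigr).
\]
% Decohered case: the branches no longer interfere, and the cross term is gone:
\[
  \Pr_{\mathrm{dec}}(x)
  = \tfrac{1}{2}|\langle x|A\rangle|^{2} + \tfrac{1}{2}|\langle x|B\rangle|^{2}.
\]
```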
It’s not the existence, it’s the lack of interaction/interference.
The topic of this thread is: In naive MWI, it is postulated that all Everett branches coexist. (For example, if I toss a quantum fair coin n times, there will be 2^n branches with all possible outcomes.) Under this assumption, it’s not clear in what sense the Born rule is true. (What is the meaning of the probability measure over the branches if all branches coexist?)
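One way to make that last parenthetical concrete (a toy counting exercise of my own; the bias p = 0.9 and n = 10 are arbitrary illustration values): for a biased quantum coin, counting branches and weighting them by Born measure single out different “typical” frequencies.

```python
from itertools import product

# Toy comparison: n tosses of a biased quantum coin with Born probability p
# for outcome 1. All 2^n outcome sequences "coexist"; the question above is
# what the measure over them means.
p, n = 0.9, 10  # arbitrary illustration values

count_near_half = 0  # sequences (by raw count) whose frequency of 1s is within 0.1 of 0.5
born_near_p = 0.0    # total Born measure of sequences whose frequency is within 0.1 of p

for seq in product((0, 1), repeat=n):
    ones = sum(seq)
    freq = ones / n
    born_weight = p**ones * (1 - p) ** (n - ones)  # Born weight of this branch
    if abs(freq - 0.5) <= 0.1:
        count_near_half += 1
    if abs(freq - p) <= 0.1:
        born_near_p += born_weight

print(f"fraction of branches, by raw count, with frequency near 0.5: {count_near_half / 2**n:.3f}")
print(f"total Born measure of branches with frequency near p = {p}:  {born_near_p:.3f}")
# By raw count, most branches show a frequency near 1/2; by Born measure, most
# of the weight sits on branches showing a frequency near p. "All branches
# coexist" by itself does not say which notion of "most" observed frequencies
# should track.
```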