Not sure what you mean by “this would require a pretty small universe”.
If we live in naive MWI, an IBP agent would not care for good reasons, because naive MWI is a “library of babel” where essentially every conceivable thing happens no matter what you do.
Also not sure what you mean by “some sort of sampling”. AFAICT, quantum IBP is the closest thing to a coherent answer that we have, by a significant margin.
If we live in naive MWI, an IBP agent would not care for good reasons, because naive MWI is a “library of babel” where essentially every conceivable thing happens no matter what you do.
Doesn’t the frequency of amplitude-patterns change depending on what you do? So an agent can care about that instead of point-states.
I mean, if the universe is big enough for every conceivable thing to happen, then we should notice that we find ourselves in a surprisingly structured environment, and we need to assume some sort of effect where, when a cognitive architecture opens its eyes, it opens them in different places with likelihood corresponding to how common those places are (e.g., among all Turing machines).
I.e., if your brain is uploaded and you see a door in front of you, and when you open it, 10 identical computers each start running a copy of you: 9 show you a green room, 1 shows you a red room. You expect that if you enter a room and open your eyes, in 9/10 cases you’ll find yourself in a green room.
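As a minimal sketch of this copy-counting intuition (assuming you weight anticipated experiences by how many copies have them; the setup and numbers are just the illustrative ones above):

```python
import random

# 10 identical copies are instantiated: 9 see a green room, 1 sees a red room.
# Weighting anticipated experiences by copy count gives P(green) = 9/10.
ROOMS = ["green"] * 9 + ["red"]

def sample_experience():
    # Picking a copy uniformly at random stands in for "which copy am I?"
    return random.choice(ROOMS)

trials = 100_000
greens = sum(sample_experience() == "green" for _ in range(trials))
print(greens / trials)  # ~0.9
```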
So if that is the situation we’re in (everything happens), then I think a more natural way to rescue our values would be to care about what cognitive algorithms usually experience when they open their eyes or other senses. Do they suffer, or do they find all sorts of meaningful beauty in their experiences? I don’t think we should stop caring about suffering just because it happens anyway, if we can still have an impact on how common it is.
If we live in a naive MWI, an IBP agent doesn’t care, for good reasons internal to it (somewhat similar to how, if we’re in our world, an agent that cares only about ontologically basic atoms doesn’t care about our world, for good reasons internal to it). But I think that, conditional on a naive MWI, humanity’s CEV is different from what IBP agents can natively care about.
Your reasoning is invalid, because in order to talk about updating your beliefs in this context, you need a metaphysical framework which knows how to deal with anthropic probabilities (e.g. it should be able to answer puzzles in the vein of the anthropic trilemma according to some coherent, well-defined mathematical rules). IBP is such a framework, but you haven’t proposed any alternative, not to mention an argument for why that alternative is superior.
I always thought that in naive MWI what matters is not whether something happens in an absolute sense, but how much Born measure is concentrated on branches that contain good things rather than bad things.
The problem is that this requires introducing a special decision-theory postulate that you’re supposed to care about the Born measure for some reason, even though the Born measure doesn’t correspond to ordinary probability.
Huh? The whole point of the Born rule is to get a set of ordinary probabilities, which you can then test frequentistically over a run of experiments. Quantum-mechanical measure (amplitude) isn’t ordinary probability, but that’s the thing you put into the Born rule, not the thing you get out of it. And it has its own role, which is explaining how much contribution each component state makes to a coherent superposition.
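To make the amplitude-in, probability-out direction concrete, here is a hedged sketch (the qubit amplitudes are made up for illustration):

```python
import random
from collections import Counter

# Born rule: amplitudes go in, ordinary probabilities come out.
# Illustrative amplitudes for a qubit state a|0> + b|1>, with |a|^2 + |b|^2 = 1.
a, b = 0.6, 0.8j
probs = {"0": abs(a) ** 2, "1": abs(b) ** 2}  # p_i = |amplitude_i|^2

# Frequentist check: frequencies over repeated measurements should approach
# the Born probabilities.
outcomes = random.choices(list(probs), weights=list(probs.values()), k=100_000)
print(probs, Counter(outcomes))  # frequencies approach 0.36 / 0.64
```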
ETA
There is a further problem interpreting the probabilities of fully decohered branches. (Calling them Everett branches is very misleading; a clear theory of decoherence is precisely what’s lacking in Everett’s work.)
Whether you are supposed to care about them ethically is very unclear, since it is not clear how utilitarian-style ethics would apply, even if you could make sense of the probabilities. But you are not supposed to care about them for the purposes of doing science, since they can no longer make any difference to your branch. MWI works like a collapse theory in practice.
I always thought that in naive MWI what matters is not whether something happens in an absolute sense, but how much Born measure is concentrated on branches that contain good things rather than bad things.
It’s tempting to ethically discount low-measure decoherent branches in some way, because that most closely approximates conventional single-world utilitarianism; that is something “naive MWI” might mean. However, one should not jump to the conclusion that something is true just because it is convenient. And of course, MWI is a scientific theory, so it doesn’t come with built-in ethics.
The alternative view starts with the question of whether a person in a low-measure world still counts as a full person. If they do not, is that because they are a near-zombie, with a faint consciousness that weighs little in a hedonic-utilitarian calculus? If they are not such zombies, why would they not count as full persons? The standard utilitarian argument that people in far-off lands are still moral patients seems to apply. Of course, MWI doesn’t directly answer the question about consciousness.
(For example, if I toss a quantum fair coin n times, there will be 2^n branches with all possible outcomes.)
If “naive MWI” means the idea that any elementary interaction produces decoherent branching, then it is wrong for the reasons I explain here. Since there are some coherent superpositions, and not just decoherent branches, there are cases where the Born rule gives you ordinary probabilities, as any undergraduate physics student knows.
(What is the meaning of the probability measure over the branches if all branches coexist?)
It’s not the existence, it’s the lack of interaction/interference.
The topic of this thread is: In naive MWI, it is postulated that all Everett branches coexist. (For example, if I toss a quantum fair coin n times, there will be 2^n branches with all possible outcomes.) Under this assumption, it’s not clear in what sense the Born rule is true. (What is the meaning of the probability measure over the branches if all branches coexist?)
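To spell out the setup being questioned (a sketch only; it illustrates how branch count and Born measure come apart, not how to interpret the measure):

```python
from itertools import product

# n quantum coin tosses yield 2^n coexisting branches in naive MWI.
# The Born rule assigns each branch a measure; for a biased coin the measure
# concentrates on a few branches even though all 2^n of them "exist".
n = 3
p_heads = 0.5  # try 0.9: branch count stays 2^n, but measure concentrates

branches = list(product("HT", repeat=n))  # all 2^n outcome histories
measure = {
    b: p_heads ** b.count("H") * (1 - p_heads) ** b.count("T")
    for b in branches
}
print(len(branches))          # 2^n = 8 branches
print(sum(measure.values()))  # Born measures still sum to 1.0
```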