I’m assuming all deciders are coming to the same best decision, so there are no worries about deciders disagreeing if you change your mind.
I’m going to be the odd-one-out here and say that both answers are correct at the time they are made… if you care far more (which I don’t think you should) about African kids in your own Everett branch (or live in a hypothetical crazy universe where many worlds is false).
(Chapter 1 of Permutation City spoiler; please click here first if you haven’t read it yet, you’ll be glad you did...): Jura lbh punatr lbhe zvaq nsgre orvat gbyq, lbh jvyy or yvxr Cnhy Qheunz, qvfnoyvat gur cnenpuhgr nsgre gur pbcl jnf znqr.
If you care about African kids in other branches equally, then the first decision is always correct, because although the second choice would make it more likely that kids in your branch will be better off, it will cost the kids in other branches more.
Why drag quantum mechanics into this? Taking the expected value gives you exactly the same thing as it does classically, and the answer is still the same. Nay is right and yea is wrong. You seem to be invoking “Everett branch” as a mysterious, not-so-useful answer.
I’m not trying to be mysterious. As far as I can see, there is a distinction. The expected value of switching to Yea from your point of view is affected by whether or not you care about the kids in the branches you are not yourself in.
After being told your status, you’re split:
1⁄20: You = Decider, Coin = Heads. Yea is very bad here.
9⁄20: You = Decider, Coin = Tails. Yea is good here.
9⁄20: You = Passenger, Coin = Heads. Yea is very bad here.
1⁄20: You = Passenger, Coin = Tails. Yea is good here.
After being told your status, the new information changes the expected values across the set of branches you could now be in, because that set has changed. It is now only the first two lines above, and is heavily weighted towards Yea being good, so for the kids in your own branches, Yea wins.
But the other branches still exist. If all deciders must come to the same decision (see above), then the expected value of Yea is lower than that of Nay as long as you care about the kids in branches you’re not in yourself, so Nay wins. In fact, this expected value is exactly what it was before you had the new information about which branches you can now be in yourself.
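To make the arithmetic concrete, here is a minimal sketch of both calculations. Only the probabilities (1⁄20, 9⁄20, 9⁄20, 1⁄20) are taken from the breakdown above; the payoff amounts are hypothetical placeholders standing in for the donation amounts in the original problem, chosen only to match the structure described there (a large donation for Yea-and-Tails, a small one for Yea-and-Heads, an intermediate one for Nay).

```python
# A minimal sketch of the expected-value argument above (not the original post's numbers).
# Only the probabilities come from the comment; the payoffs are hypothetical placeholders.

YEA_IF_TAILS = 1000  # placeholder: donation if the coin was Tails and all deciders say Yea
YEA_IF_HEADS = 100   # placeholder: donation if the coin was Heads and all deciders say Yea
NAY_ALWAYS = 700     # placeholder: donation if all deciders say Nay, either way

# Before anyone learns their status: P(Heads) = P(Tails) = 1/2.
ev_yea_all_branches = 0.5 * YEA_IF_HEADS + 0.5 * YEA_IF_TAILS  # 550.0
ev_nay_all_branches = float(NAY_ALWAYS)                        # 700.0

# After learning you are a Decider: P(Tails | Decider) = (9/20) / (1/20 + 9/20) = 9/10.
p_tails_given_decider = (9 / 20) / (1 / 20 + 9 / 20)
ev_yea_own_branch = (p_tails_given_decider * YEA_IF_TAILS
                     + (1 - p_tails_given_decider) * YEA_IF_HEADS)  # 910.0

print(ev_yea_all_branches, ev_nay_all_branches, ev_yea_own_branch)
# With these placeholder numbers, Yea loses to Nay when you count every branch
# (550 < 700), yet wins from the updated, own-branch point of view (910 > 700);
# that is exactly the tension described in the two paragraphs above.
```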
Okay. You’re bringing up quantum mechanics needlessly, though. This is exactly the same reasoning cousin_it went through in the post, and it leads to exactly the same problem, since everyone can be expected to reason like you. If yea is only said because it generates better results, and you always switch to yea, then, QED, always saying yea should have better results. But it doesn’t!
But my whole point has been that yea can yield better results, iff you don’t care about kids in other branches, which would make branches relevant.
To show that branches are not relevant, tell me why that argument (that Yea wins in this case) is wrong; don’t just assert that it’s wrong.
Since, as I’ve been saying, it’s identical to the original problem, if I knew how to resolve it I’d already have posted the resolution. :)
What can be shown is that it’s contradictory. If yea is better for your “branch” when you vote yea, and everyone always follows this reasoning and votes yea, and the whole point of “branches” is that they’re no longer causally linked, then all the branches should do better. More simply, if yea is the right choice for every decider, it’s because “always yea” actually does better than “always nay.”
But always yea is not better than always nay.
If you would like to argue that there is no contradiction, you could try and find a way to resolve it by showing how a vote can be better every single time without being better all the time.
Yea is the best choice for every decider who only cares about the kids in their own Everett branches.
It’s not the best choice for deciders (or non-deciders, though they don’t get a say) who care equally about kids across all the branches. Their preferences are as before.
It’s a really lousy choice for any non-deciders who only care about the kids in their Everett branches. Their expected outcome for “yea” just got worse by the same amount that the first lot of deciders, who only care about their own kids, got better. Unfortunately for them, their sole decider thinks he’s probably in the Tails group, and that his kids will gain by saying “yea”, which is perfectly rational for him to think given the information he has at that time.
There is no contradiction.
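A short continuation of the sketch above (same hypothetical placeholder payoffs) makes the “worse by the same amount” bookkeeping explicit: the deciders’ apparent post-update gain is exactly offset by the non-deciders’ post-update loss, and averaging over who you might turn out to be recovers the pre-update value.

```python
# Continuation of the earlier sketch; same hypothetical placeholder payoffs.
from fractions import Fraction as F

YEA_IF_TAILS, YEA_IF_HEADS = 1000, 100  # placeholders, as before

# Probabilities from the breakdown in the comments above.
p = {("decider", "heads"): F(1, 20), ("decider", "tails"): F(9, 20),
     ("passenger", "heads"): F(9, 20), ("passenger", "tails"): F(1, 20)}

def ev_yea(status):
    """Expected payoff of 'all deciders say Yea', conditional on your own status."""
    p_status = p[(status, "heads")] + p[(status, "tails")]
    p_tails = p[(status, "tails")] / p_status
    return p_tails * YEA_IF_TAILS + (1 - p_tails) * YEA_IF_HEADS

ev_decider = ev_yea("decider")      # 910: Yea looks good to a decider
ev_passenger = ev_yea("passenger")  # 190: Yea looks bad to a passenger
ev_overall = F(1, 2) * ev_decider + F(1, 2) * ev_passenger  # 550: the pre-update value

# The deciders' apparent gain over the pre-update value equals the passengers' loss:
assert ev_decider - ev_overall == ev_overall - ev_passenger
```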
What does an entity that only cares about the kids in its Everett branches even look like? I am confused. Usually things have preferences over lotteries of outcomes, where an outcome is an entire multiverse; these things are physically realized, and their preferences change when the coin flip happens? How does that even work? I guess you can implement an entity that works like that if you want, but I’m not certain why we’d even call it the same entity at any two times. This sort of entity would do very well to cut out its eyes and ears so it never learns it’s a decider and begin chanting “nay, nay, nay!” wouldn’t it?
Example 1: Someone who doesn’t know about or believe in many worlds. They don’t care about kids in alternate Everett branches because, to their mind, those kids don’t exist and so have zero value. To them, all value is in this single universe, with a coin that they are 90% sure landed Tails. By their beliefs, “yea” wins. Most people just don’t think about entire multiverses.
Example 2: Someone who gets many worlds, but tends to be overwhelmingly more charitable to those that feel Near rather than Far, and to those that feel like Their Responsibility rather than Someone Else’s Problem. I hear this isn’t too uncommon :-)
Actual cutting aside, this is an excellent strategy. Upvoted :)
I suppose I’ll avoid repeating myself and try to say new things.
You seem to be saying that when you vote yea, it’s right, but when other people vote yea, it’s wrong. Hmm, I guess you could resolve it by allowing the validity of logic to vary depending on who used it. But that would be bad.
(Edited for clarity)
I think we may be misunderstanding each other, and possibly even arguing about different things. I’m finding it increasingly hard to see how your comments could possibly be a logical response to those you’re responding to, and I suspect you’re feeling the same.
Serves me right, of course.
When I do what? What are you even talking about?
Ah, sorry, that does look odd. I meant “when you vote ‘yea,’ it’s okay, but when they vote ‘yea’ for exactly the same reasons, it’s bad.”
Not sure why the ones voting “yea” would be me. I said I disagreed with those deciders who didn’t care about kids in other branches.
Anyway, they vote differently despite being in the same situation because their preferences are different.
Well, I give up. This makes so little sense to me that I have lost all hope of this going somewhere useful. It was interesting, though, and it gave me a clearer picture of the problem, so I regret nothing :D
We’re not perfect Bayesians (and certainly don’t have common knowledge of each other’s beliefs!), so we can agree to disagree.
Besides, I’m running away for a few days. Merry Xmas :)
I’m not trying to be mysterious. As far as I can see, there is a distinction. The expected value of switching to Yea from your point of view is affected by whether or not you care about the kids in the branches you are not yourself in.
After being told your status, you’re split:
1⁄20: You = Decider, Coin = Heads. Yea is very bad here.
9⁄20: You = Decider, Coin = Tails. Yea is good here.
9⁄20: You = Passenger, Coin = Heads. Yea is very bad here.
1⁄20: You = Passenger, Coin = Tails. Yea is good here.
After being told your status, the new information changes the expected values across the set of branches you could now be in, because that set has changed. It is now only the first two lines above, and is heavily weighted towards Yea being good, so for the kids in your own branches, Yea wins.
But the other branches still exist. If all deciders must come to the same decision (see above), then the expected value of Yea is lower than that of Nay as long as you care about the kids in branches you’re not in yourself, so Nay wins. In fact, this expected value is exactly what it was before you had the new information about which branches you can now be in yourself.
I think your reasoning here is correct and that it is as good an argument against the many worlds interpretation as any that I have seen.
The best argument against the many worlds interpretation that you have seen is somewhat muddled thinking about ethical considerations with respect to normal coin tosses?
Yup, that’s the best. I’d be happy to hear about the best you’ve seen, especially if you’ve seen better.
Why do you assume I would be inclined to one-up the argument? The more natural interpretation of my implied inference is in approximately the reverse direction.
If the best argument against MWI that a self-professed physicist and MWI critic has ever seen has absolutely zero persuasive power, then that is rather strong evidence in favor.
I am new to this board and come in with a “prior” of rejecting MWI beyond the tiniest amount on the basis of, among other things, conservation of energy and mass. (Where do these constantly forming new worlds come from?) MWI seems more like a mapmaker’s mistake than a description of the territory, which manifestly has only one universe in it every time I look.
I was inviting you to show me with links or description whatever you find most compelling, if you could be bothered to. I am reading main sequence stuff and this is one of the more interesting puzzles among Less Wrong’s idiosyncratic consensi.
Here is a subsequent discussion about some experimental test(s) of MWI. Also, here is a video discussion between Scott Aaronson and Yudkowsky (starting at 38:11). More links on the topic can be found here.
ETA: Sorry, I wanted to reply to another of your comments; wrong tab. Anyway.
Wikipedia points to a site that says conservation of energy is not violated. Do you know if it’s factually wrong, or what’s going on here? (If so, can you update Wikipedia? :D)
Q22 Does many-worlds violate conservation of energy?
First, the law of conservation of energy is based on observations within each world. All observations within each world are consistent with conservation of energy, therefore energy is conserved.
Second, and more precisely, conservation of energy, in QM, is formulated in terms of weighted averages or expectation values. Conservation of energy is expressed by saying that the time derivative of the expected energy of a closed system vanishes. This statement can be scaled up to include the whole universe. Each world has an approximate energy, but the energy of the total wavefunction, or any subset of it, involves summing over each world, weighted with its probability measure. This weighted sum is a constant. So energy is conserved within each world and also across the totality of worlds.
One way of viewing this result—that observed conserved quantities are conserved across the totality of worlds—is to note that new worlds are not created by the action of the wave equation, rather existing worlds are split into successively “thinner” and “thinner” slices, if we view the probability densities as “thickness”.
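For readers who prefer symbols, the FAQ’s second point can be written in standard quantum-mechanical notation (my gloss, not part of the quoted text): for a closed system with a time-independent Hamiltonian the expected energy is constant, and once the state is decomposed into effectively non-interfering branches, that expectation value is just the measure-weighted sum of the branch energies.

```latex
% A gloss on the quoted FAQ in standard QM notation (not part of the quoted text).
% For a closed system with a time-independent Hamiltonian \hat{H}:
\[
  \frac{d}{dt}\,\langle \psi(t) \mid \hat{H} \mid \psi(t) \rangle = 0 .
\]
% Decomposing the state into (approximately) non-interfering, normalised branches,
% $\lvert \psi \rangle = \sum_i c_i \lvert \psi_i \rangle$, the cross terms are
% negligible after decoherence, so the conserved quantity is the weighted sum
\[
  \langle \hat{H} \rangle \;\approx\; \sum_i \lvert c_i \rvert^2
  \langle \psi_i \mid \hat{H} \mid \psi_i \rangle ,
\]
% i.e. each world contributes its own approximate energy, weighted by its
% probability measure, and it is this total that stays constant.
```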
I don’t understand. How is my argument an argument against the many worlds interpretation? (Without falling into the logical fallacy of Appeal to Consequences).
It would seem to suggest that if I want to be rich I should buy a bunch of lottery tickets and then kill myself when I don’t win.
I have not seen the local discussion of MWI and Everett branches, but my “conclusion” in the past has been that MWI is a defect of the mapmaker and not a feature of the territory. I’d be happy to be pointed to something that would change my mind or at least rock it a bit, but for now it looks like angels dancing on the heads of pins. Has somebody provided an experiment that would rule MWI in or out? If so, what was the result? If not, then how is a consideration of MWI anything other than confusing the map with the territory?
If I have fallen into Appeal to Consequences with my original post, then my bad.
I don’t think that’s the case, but even if it were, using that to argue against the likelihood of MWI would be Appeal to Consequences.
That’s what I used to think :)
If you’re prepared for a long but rewarding read, Eliezer’s Quantum Physics Sequence is a non-mysterious introduction to quantum mechanics, intended to be accessible to anyone who can grok algebra and complex numbers. Cleaning up the old confusion about QM is used to introduce basic issues in rationality (such as the technical version of Occam’s Razor), epistemology, reductionism, naturalism, and philosophy of science.
For a shorter sequence that concentrates on why MWI wins, see And the Winner is… Many-Worlds!
The idea is that MWI is the simplest explanation that fits the data, by the definition of “simplest” that has proven most useful for predicting which of several theories that match the same data is actually correct.
How is it an argument against the many worlds interpretation, unless you’re falling into the logical fallacy of Appeal to Consequences?