Why drag quantum mechanics into this? Taking the expected value gives you exactly the same thing as it does classically, and the answer is still the same: Nay is right and yea is wrong. You seem to be invoking “Everett branches” as a mysterious, not-so-useful answer.
I’m not trying to be mysterious. As far as I can see, there is a distinction. The expected value of switching to Yea from your point of view is affected by whether or not you care about the kids in the branches you are not yourself in.
After being told your status, you’re split:
1/20 are Deciders with the coin Heads. Yea is very bad here.
9/20 are Deciders with the coin Tails. Yea is good here.
9/20 are Passengers with the coin Heads. Yea is very bad here.
1/20 are Passengers with the coin Tails. Yea is good here.
That new information changes the expected values across the set of branches you could now be in, because that set has changed: it is now only the first two lines above, and it is heavily weighted towards Yea = good. So for the kids in your own branches, Yea wins.
But the other branches still exist. If all deciders must come to the same decision (see above), then the expected value of Yea is lower than that of Nay as long as you care about the kids in the branches you are not in yourself, so Nay wins. In fact, this expected value is exactly what it was before you had the new information about which branches you could now be in yourself.
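To make that bookkeeping concrete, here is a minimal sketch of the two expected values. The payoff figures are my assumption (my recollection of the original post is $1000 to the kids if the coin was Tails and all deciders say yea, $100 if it was Heads and the decider says yea, and $700 if anyone says nay); if I have the exact figures wrong, only the numbers change, not the shape of the argument.

```python
# A minimal sketch of the two expected values above.  The payoff figures
# ($1000 if Tails and all deciders say yea, $100 if Heads and the decider
# says yea, $700 if anyone says nay) are my recollection of the original
# post, not something stated in this thread.
P_HEADS = 0.5
N = 10                                   # ten participants
P_DECIDER_GIVEN_HEADS = 1 / N            # one decider on Heads
P_DECIDER_GIVEN_TAILS = 9 / N            # nine deciders on Tails
PAYOFF_YEA_HEADS, PAYOFF_YEA_TAILS, PAYOFF_NAY = 100, 1000, 700  # assumed

# Expected value across all branches (before you learn your status):
ev_yea_all = P_HEADS * PAYOFF_YEA_HEADS + (1 - P_HEADS) * PAYOFF_YEA_TAILS
ev_nay_all = PAYOFF_NAY

# Expected value over the branches you can still be in as a Decider:
p_decider = (P_HEADS * P_DECIDER_GIVEN_HEADS
             + (1 - P_HEADS) * P_DECIDER_GIVEN_TAILS)
p_heads_given_decider = P_HEADS * P_DECIDER_GIVEN_HEADS / p_decider   # 0.1
ev_yea_own = (p_heads_given_decider * PAYOFF_YEA_HEADS
              + (1 - p_heads_given_decider) * PAYOFF_YEA_TAILS)

print(ev_yea_all, ev_nay_all)   # ~550 vs 700: Nay wins counting every branch
print(ev_yea_own, ev_nay_all)   # ~910 vs 700: Yea wins counting only your own
```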
Okay. You’re bringing up quantum mechanics needlessly, though. This is exactly the same reasoning as cousin_it went through in the post, and it leads to exactly the same problem, since everyone can be expected to reason like you. If yea is only said because it generates better results, and you always switch to yea, then, QED, always saying yea should have better results. But it doesn’t!
But my whole point has been that yea can yield better results iff you don’t care about kids in other branches, which would make branches relevant.
To show that branches are not relevant, tell me why that argument (that Yea wins in this case) is wrong; don’t just assert that it’s wrong.
Since, as I’ve been saying, it’s identical to the original problem, if I knew how to resolve it I’d already have posted the resolution. :)
What can be shown is that it’s contradictory. If yea is better for your “branch” when you vote yea, and everyone always follows this reasoning and votes yea, and the whole point of “branches” is that they’re no longer causally linked, then all the branches should do better. More simply, if yea is the right choice for every decider, it’s because “always yea” actually does better than “always nay.”
But always yea is not better than always nay.
If you would like to argue that there is no contradiction, you could try to find a way to resolve it by showing how a vote can be better every single time without being better all the time.
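The policy-level half of this is easy to check directly. Here is a quick simulation of the two policies, carrying over the same assumed payoff figures as the sketch earlier in the thread; the exact dollar amounts are an assumption, but they are what make “always nay” come out ahead.

```python
# Quick check of "always yea" vs "always nay".  Payoffs are the same
# assumed figures as in the earlier sketch, not established in this thread.
import random

def play(always_yea: bool) -> int:
    """One run of the game when every decider follows the same policy."""
    heads = random.random() < 0.5
    if not always_yea:
        return 700                       # someone says nay -> $700
    return 100 if heads else 1000        # all say yea: $100 on Heads, $1000 on Tails

def average_payoff(always_yea: bool, trials: int = 100_000) -> float:
    return sum(play(always_yea) for _ in range(trials)) / trials

print("always yea:", average_payoff(True))    # ~550
print("always nay:", average_payoff(False))   # 700.0
```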
Yea is the best choice for every decider who only cares about the kids in their own Everett branches.
It’s not the best choice for deciders (or non-deciders, though they don’t get a say) who care equally about kids across all the branches. Their preferences are as before.
It’s a really lousy choice for any non-deciders who only care about the kids in their Everett branches. Their expected outcome for “yea” just got worse by the same amount that the first lot of deciders, who only care about their own kids, got better. Unfortunately for them, their sole decider (in the Heads branches) thinks he’s probably in the Tails group and that his kids will gain by saying “yea”, which he is perfectly rational to think given the information he has at the time.
There is no contradiction.
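To put a number on that last case (same assumed payoffs as the earlier sketch): a Passenger updates the other way, towards Heads, so the deciders’ “yea” looks much worse from their own-branch point of view.

```python
# Same assumed payoffs: $100 (yea, Heads), $1000 (yea, Tails), $700 (nay).
# A Passenger's posterior goes the other way: P(Heads | Passenger) = 9/10.
p_heads_given_passenger = (0.5 * 0.9) / (0.5 * 0.9 + 0.5 * 0.1)   # 0.9
ev_yea_passenger = (p_heads_given_passenger * 100
                    + (1 - p_heads_given_passenger) * 1000)
print(ev_yea_passenger)   # ~190, versus 700 for nay -- hence "really lousy"
```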
What does an entity that only cares about the kids in its Everett branches even look like? I am confused. Usually things have preferences over lotteries over outcomes, where an outcome is an entire multiverse; but these entities are physically realized and their preferences change when the coin flip happens? How does that even work? I guess if you want you can implement an entity that works like that, but I’m not certain why we’d even call it the same entity at any two times. This sort of entity would do very well to cut out its eyes and ears so it never learns it’s a decider and begin chanting “nay, nay, nay!”, wouldn’t it?
Example 1: Someone who doesn’t know about or believe in many worlds. They don’t care about kids in alternate Everett branches because, to their mind, those branches don’t exist and so have zero value. In their mind, all value is in this single universe, with a coin that they are 90% sure landed Tails. By their beliefs, “yea” wins. Most people just don’t think about entire multiverses.
Example 2: Someone who gets many worlds, but is inclined to be overwhelmingly more charitable to those that feel Near rather than Far, and to those that feel like Their Responsibility rather than Someone Else’s Problem. I hear this isn’t too uncommon :-)
As for cutting out eyes and ears so you never learn you’re a decider: actual cutting aside, this is an excellent strategy. Upvoted :)
I suppose I’ll avoid repeating myself and try to say new things.
You seem to be saying that when you vote yea, it’s right, but when other people vote yea, it’s wrong. Hmm, I guess you could resolve it by allowing the validity of logic to vary depending on who used it. But that would be bad.
(Edited for clarity)
I think we may be misunderstanding each other, and possibly even arguing about different things. I’m finding it increasingly hard to see how your comments could possibly be a logical response to those you’re responding to, and I suspect you’re feeling the same.
Serves me right, of course.
When I do what? What are you even talking about?
Ah, sorry, that does look odd. I meant “when you vote ‘yea,’ it’s okay, but when they vote ‘yea’ for exactly the same reasons, it’s bad.”
Not sure why the ones voting “yea” would be me. I said I disagreed with those deciders who didn’t care about kids in other branches.
Anyway, they vote differently despite being in the same situation because their preferences are different.
Well, I give up. This makes so little sense to me that I have lost all hope of this going somewhere useful. It was interesting, though, and it gave me a clearer picture of the problem, so I regret nothing :D
We’re not perfect Bayesians (and certainly don’t have common knowledge of each other’s beliefs!), so we can agree to disagree.
Besides, I’m running away for a few days. Merry Xmas :)