I seem to be entering a new stage in my ‘study of Less Wrong beliefs’ where I feel like I’ve identified and assimilated a large fraction of them, but am beginning to notice a collusion of contradictions. This isn’t so surprising, since Less Wrong is the grouped beliefs of many different people, and it’s each person’s job to find their own self-consistent ribbon.
But just to check one of these—Omega’s accurate prediction of your choice in the Newcomb problem, which assumes determinism, is actually impossible, right?
You can get around the universe being non-deterministic because of quantum mechanical considerations using the many worlds hypothesis: all symmetric possible ‘quark’ choices are made, and the universe evolves all of these as branching realities. If your choice to one-box or two-box is dependent upon some random factors, then Omega can’t predict what will happen because when he makes the prediction, he is up-branch of you. He doesn’t know which branch you’ll be in. Or, more accurately, he won’t be able to make a prediction that is true for all the branches.
So long as you make your Newcomb’s choice for what seem like good reasons rather than by flipping a quantum coin, it is likely that very many of you will pick the same good reasons, and that Omega can easily achieve 99% or higher accuracy. I would expect almost no Eliezer Yudkowskys to two-box—if Robin Hanson is right about mangled worlds and there’s a cutoff for worlds of very small amplitude, possibly none of me. Remember, quantum branching does not correspond to high-level decisionmaking.
Yes, most Eliezer Yudkowskys will 1-box. And most byrnemas too. But the new twist (new for me, anyway) is that the Eliezers who two-box are the ones who really win, as rare as they are.
The one who wins or loses is the one who makes the decision. You might as well say that if someone buys a quantum lottery ticket, the one who really wins is the future self who wins the lottery a few days later; but actually, the one who buys the lottery ticket loses.
The slight quantum chance that EY will 2-box causes the sum of EYs to lose, relative to a perfect 1-boxer, assuming Omega correctly predicts that chance and randomly fills boxes accordingly. The precise Everett branches where EY 2-boxes and where EY loses are generally different, but the higher the probability that he 1-boxes, the higher his expected value is.
And, also, we define winning as winning on average. A person can get lucky and win the lottery—doesn’t mean that person was rational to play the lottery.
Interestingly, I worked through the math once to see if you could improve on committed 1-boxing by using a strategy of quantum randomness. Assuming Omega fills the boxes such that P(box A has $)=P(1-box), P(1-box)=1 is the optimal solution.
Interesting. I was idly wondering about that. Along somewhat different lines:
I’ve decided that I am a one-boxer, and I will one-box. With the following caveat: at the moment of decision, I will look for an anomaly with virtually zero probability. A star streaks across the sky and fuses with another one. Someone spills a glass of milk and, halfway to the ground, the milk rises up and fills itself back into the glass. If this happens, I will 2-box.
Winning the extra amount in this way in a handful of worlds won’t do anything to my average winnings—it won’t even increase it by epsilon. However, it could make a difference if something really important is at stake, where I would want to secure the chance that it happens one time in the whole universe.
Why is this comment being down-voted? I thought it was rather clever to use Omega’s one weak spot—quantum uncertainty—to optimize your winnings even over a set with measure zero.
Because Omega is going to know what triggers you would use for anomalies. A star streaking across the sky is easy to see coming if you know the current state of the universe. As such, Omega would know you are about to two-box even though you are currently planning to one-box.
When the star streaks across the sky, you think, “Ohmigosh! It happened! I’m about to get rich!” Then you open the boxes and get $1000.
Essentially, it boils down to this: if you can predict a scenario where you will two-box instead of one-box, then Omega can as well.
The idea of flipping quantum coins is more foolproof. The idea of stars streaking or milk unspilling is only hard for us to see coming. Not to mention it will probably trigger all sorts of biases when you start looking for ways to cheat the system.
Note: I am not up to speed on quantum mechanics. I could be off on a few things here.
OK, right: looking for a merging of stars would be a terrible anomaly to use because that’s probably classical mechanics and Omega-predictable. The milk unspilling would still be a good example, because Omega can’t see it coming either. (He can accurately predict that I will two-box in this case, but he can’t predict that the milk will unspill.)
I would have to be very careful that the anomaly I use is really not predictable. For example, I screwed up with the streaking star. I was already reluctant to trust flipping quantum coins, whatever those are. They would need to be flipped or simulated by some mechanical device and may have all kinds of systematic biases and impracticalities if you are actually trying to flip 10^23 coins.
Without having plenty of time to think about it, and say, some physicists advising me, it would probably be wise for me to just one-box.
I didn’t downvote, but I confess I don’t really know what you’re talking about in that comment. Why would you two-box in that case? What really important thing is at stake? I don’t get it.
OK. The way I’ve understood the problem with Omega is that Omega is a perfect predictor so you have 2 options and 2 outcomes:
you two-box --> you get $2,000 ($1,000 in each box)
you one-box --> you get $1M ($1M in one box, $1,000 in the second box)
If Omega is not a perfect predictor, it’s possible that you two-box and you get $1,001,000. (Omega incorrectly predicted you’d one-box.)
However, if you are likely to 2-box using this reasoning, Omega will adjust his prediction accordingly (and will even reduce your winnings when you do 1-box, so that you can’t beat him).
My solution was to 1-box almost always, so that Omega predicts you will one-box, but then ‘cheat’ and 2-box almost never (but sometimes). According to Greg, your ‘sometimes’ has to be over a set of measure 0; any larger than that and you’ll be penalized by Omega’s arithmetic.
What really important thing is at stake?
Nothing, if only an extra thousand is at stake; I probably wouldn’t even bother with my quantum caveat, since one million dollars would be great anyway. But I can imagine an unfriendly Omega giving me choices where I would really want to have both boxes maximally filled … and then I’ll have to realize (rationally) that I must almost always 1-box, but can get away with 2-boxing a handful of times. The problem with a handful is this: how does a subjective observer choose something so rarely? They must identify an appropriately rare quantum event.
So this job could even be accomplished by flipping a quantum coin 10000 times and only two-boxing when they come up tails each time. You’re just looking for a decision mechanism that only applies in a handful of branches.
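For scale, the measure of that all-tails branch is easy to compute exactly (a sketch in Python; the 10,000-flip figure is from the comment above, and floats can't even represent a number this small):

```python
import math
from fractions import Fraction

# Probability that a fair quantum coin comes up tails 10,000 times in a row:
# this is the fraction of branches (by measure) in which the agent two-boxes.
n_flips = 10_000
p_two_box = Fraction(1, 2) ** n_flips  # exact as a rational number

# Too small for a float (it underflows to 0.0), so report its magnitude in logs:
log10_p = -n_flips * math.log10(2)  # roughly -3010.3
print(f"P(two-box) is about 10^{log10_p:.1f}")
print(float(p_two_box))  # 0.0 (underflow), even though the true value is positive
```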
The math is actually quite straightforward, if anyone cares to see it. Consider a generalized Newcomb’s problem. Box A either contains $A or nothing, while box B contains $B (obviously A>B, or there is no actual problem). Let Pb be the probability that you 1-box. Let Po be the probability that Omega fills box A. (Note that only quantum randomness counts here: if you decide by a “random” but deterministic process, Omega knows how it turns out, even if you don’t, so Pb=0 or 1.) Let F be your expected return.
Regardless of what Omega does, you collect the contents of box A, and have a (1-Pb) probability of collecting the contents of box B.
F(Po=1)= A + (1-Pb)B
F(Po=0)=(1-Pb)B
For the non-degenerate cases, these add together as expected.
F(Po, Pb) = Po(A + (1-Pb)B) + (1-Po)[(1-Pb)B]
Suppose Po = Pb := P
F(P) = P(A + (1-P)B) + [(1-P)^2] B
=P(A + B - PB) + (1 - 2P + P^2)B
=PA + PB - (P^2)B + B − 2PB + (P^2)B
=PA + PB + B − 2PB
=B + P(A-B)
If A > B, F(P) is monotonically increasing, so P = 1 gives the maximum return. If A < B, P = 0 is the maximum (I hope it’s obvious to everyone that if box B has MORE money than a full box A, 2-boxing is ideal).
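A quick numeric sanity check of this derivation (the values of A and B below are illustrative stakes, not specified in the thread):

```python
# Check that F(P) = P(A + (1-P)B) + (1-P)^2 B simplifies to B + P(A - B),
# and that expected return is maximized by always 1-boxing (P = 1) when A > B.
A, B = 1_000_000, 1_000  # illustrative: big box A, small box B

def F(P):
    return P * (A + (1 - P) * B) + (1 - P) ** 2 * B

# The expanded expression agrees with the closed form everywhere:
for P in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert abs(F(P) - (B + P * (A - B))) < 1e-6

# Grid search over P confirms the optimum sits at P = 1:
best = max(F(p / 100) for p in range(101))
print(best == F(1.0))  # True
```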
I’m not sure why you take Po = Pb. If Omega is trying to maximize his chance of predicting correctly then he’ll take Po = 1 if Pb > 1/2 and Po = 0 if Pb < 1/2. Then, assuming A > B/2, the optimal choice is Pb = 1/2.
Actually, if Omega behaves this way there is a jump discontinuity in expected value at Pb = 1/2. We can move the optimum away from the discontinuity by postulating some degree of imprecision in our ability to choose a quantum coin with the desired characteristic. Maybe when we try to pick a coin with bias Pb we end up with a coin with bias Pb + e, where e is an error chosen uniformly from [-E, E]. The optimal choice of Pb is now 1/2 + E, assuming A > 2EB, which is the case for sufficiently small E (E < 1/4 suffices). The expected payoff is now robust (continuous) to small perturbations in our choice of Pb.
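The jump discontinuity is easy to exhibit numerically. A minimal sketch, assuming Omega plays the threshold policy described above (fill box A exactly when the player's bias exceeds 1/2) and using illustrative stakes:

```python
# Expected return under Omega's threshold policy: Po = 1 iff Pb > 1/2.
# Since F(Po, Pb) = Po*A + (1 - Pb)*B, the payoff jumps by A at Pb = 1/2.
A, B = 1_000_000, 1_000  # illustrative stakes

def F_threshold(Pb):
    Po = 1.0 if Pb > 0.5 else 0.0
    return Po * A + (1 - Pb) * B

print(F_threshold(0.499))  # just below the threshold: about 501
print(F_threshold(0.501))  # just above the threshold: about 1,000,499
```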
Your solution does have Omega maximize right answers. My solution works if Omega wants the “correct” result summed over all Everett branches: for every you that 2-boxes, there exists an empty box A, even if it doesn’t usually go to the 2-boxer.
Both answers are correct, but for different problems. The “classical” Newcomb’s problem is unphysical, just as byrnema initially described. A “Quantum Newcomb’s problem” requires specifying how Omega deals with quantum uncertainty.
Interesting. Since the spirit of Newcomb’s problem depends on 1-boxing having a higher payoff, I think it makes sense to additionally postulate your solution to quantum uncertainty, as it maintains the same maximizer. That’s so even if the Everett interpretation of QM is wrong.
Let p be the probability that you 2-box, and suppose (as Greg said) that Omega lets P(box A empty) = p with its decision being independent of yours. It sounds like you’re saying you only care about the frequency with which you get the maximal reward. This is P(you 2-box)*P(box A full) = p(1-p) which is maximized by p=0.5, not by p infinitesimally small.
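A coarse grid search confirms that arithmetic:

```python
# The frequency of the jackpot branch (you 2-box AND box A is full) is
# p * (1 - p). It peaks at p = 0.5, not at infinitesimally small p.
best_p = max((p / 1000 for p in range(1001)), key=lambda p: p * (1 - p))
print(best_p)  # 0.5
```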
I think Omega’s capabilities serve a least-convenient-possible-world (LCPW) function in thought experiments; they make the possibilities simpler to consider than a more physically plausible setup might.
Also, I’d say that our wetware brains probably aren’t close to deterministic in how we decide (though it would take knowledge far beyond what we currently have to be sure of this), but e.g. an uploaded brain running on a classical computer would be perfectly (in principle) predictable.
If your choice to one-box or two-box is dependent upon some random factors, then Omega can’t predict what will happen because when he makes the prediction, he is up-branch of you. He doesn’t know which branch you’ll be in.
What Omega can do instead is simulate every branch and count the number of branches in which you two-box, to get a probability, and treat you as a two-boxer if this probability is greater than some threshold. This covers both the cases where you roll a die, and the cases where your decision depends on events in your brain that don’t always go the same way. In fact, Omega doesn’t even need to simulate every branch; a moderate sized sample would be good enough for the rules of Newcomb’s problem to work as they’re supposed to.
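A rough Monte Carlo sketch of this branch-sampling scheme (the 1% two-boxing rate, the sample size, and the classification threshold are all assumptions for illustration, not from the thread):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# The agent two-boxes only in a small fraction of branches (here 1%).
# Omega simulates a moderate sample of branches and treats the agent as a
# two-boxer only if the observed two-boxing frequency clears a threshold.
P_TWO_BOX = 0.01   # per-branch chance the agent two-boxes (assumed)
SAMPLE = 1_000     # number of branches Omega simulates
THRESHOLD = 0.5    # classification cutoff

observed = sum(random.random() < P_TWO_BOX for _ in range(SAMPLE)) / SAMPLE
prediction = "two-boxer" if observed > THRESHOLD else "one-boxer"

print(observed, prediction)
# Omega predicts "one-boxer", and that prediction is right in ~99% of branches.
```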
But the real reason for treating Omega as a perfect predictor is that one of the more natural ways of modeling an imperfect predictor is to decompose it into some probability of being a perfect predictor and some probability of its prediction being completely independent of your choice, the probabilities depending on how good a predictor you think it really is. In that context, denying the possibility that a perfect predictor could exist is decidedly unhelpful.
Thanks to everyone who replied. So I see that we don’t really believe that the universe is deterministic in the way implied by the problem. OK, that’s consistent then.
I’m sufficiently uninformed on how quantum mechanics would interact with determinism that so far I’ve been operating under the assumption that it doesn’t. Maybe someone here can enlighten me? Does the behavior of things-that-behave-quantumly typically affect macro-level events, or is this restricted to when you look at them and record experimental data as a direct causal result of the behavior? Is there some way to prove that quantum events are random, as opposed to caused deterministically by something we just haven’t found? (I’m not sure even in principle how you could prove that something is random. It’d be proving the negative on the existence of causation for a possibly-hidden cause.)
Does the behavior of things-that-behave-quantumly typically affect macro-level events, or is this restricted to when you look at them and record experimental data as a direct causal result of the behavior?
Yes; since many important macroscopic events (e.g. weather, we’re quite sure) are extremely sensitive to initial conditions, two Everett branches that differ only by a single small quantum event can quickly diverge in macroscopic behavior.
Does the behavior of things-that-behave-quantumly typically affect macro-level events...?
Yes. They only appear weird if you look at small enough scales, but classical electrons would not have stable orbits, so without quantum effects there’d be no stable atoms.
Is there some way to prove that quantum events are random, as opposed to caused deterministically by something we just haven’t found?
No, but there is evidence. There is a proof that if they were caused by something unknown but deterministic (or if there even was a classical probability function for certain events) then they would follow Bell’s inequalities. But that appears not to be the case.
But this is where things get really shaky for materialism. If something cannot be explained in X, this means there is something outside X that determines it.
Materialists must hope that in spite of Bell’s inequalities, there is some kind of non-random mechanism that would explain quantum events, regardless of whether it is possible for us to deduce it.
Alicorn asked above:
I’m not sure even in principle how you could prove that something is random.
In principle, you can’t. And one of the foundational (but non-obvious) assumptions of materialism is that nothing is truly random. The non-refutability of materialism depends upon never being able to demonstrate that something is actually random.
Later edit: I realize that this comment is somewhat of a non sequitur in the context of this thread (oops). I’ll explain: these kinds of questions have been my motivation for thinking about Newcomb in the first place. Sometimes I’m worried about whether materialism is self-consistent, sometimes I’m worried about whether dualism is a coherent idea within the context of materialism, and these questions are often conflated in my mind as a single project.
And one of the foundational (but non-obvious) assumptions of materialism is that nothing is truly random.
In that case I am not a materialist. I don’t believe in any entities that materialists don’t believe in, but I do believe that you have to resort to Many Worlds in order to be right and believe in determinism. Questions that amount to asking “which Everett branch are we in” can have nondeterministic answers.
No worries—you can still be a materialist. Many worlds is the materialist solution to the problem of random collapse. (But I think that’s what you just wrote—sorry if I misunderstood something.)
Suppose that a particle has a perfectly undetermined choice to go left or go right. If the particle goes left, a materialist must hold in principle that there is a mechanism that determined the direction, but then they can’t say the direction was undetermined.
Many worlds says that both directions were chosen, and you happen to find yourself in the one where the particle went left. So there is no problem with something outside the system swooping down and making an arbitrary decision.
The EPR paradox (Einstein-Podolsky-Rosen paradox) is a set of experiments that suggest ‘spooky action at a distance’ because particles appear to share information instantaneously, at a distance, long after an interaction between them.
People applying “common sense” would like to argue that there is some way that the information is being shared—some hidden variable that collects and shares the information between them.
Bell’s Inequality assumes only that there is some such hidden variable operating locally*, with no specification of any kind on how it works, and deduces correlations between particles sharing information that are in contradiction with experiment.
* that is, mechanically rather than ‘magically’ at a distance
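The gap Bell’s Inequality exposes can be illustrated numerically with the standard CHSH setup: quantum singlet correlations reach |S| = 2√2, while a simple local hidden-variable model (a shared hidden angle with deterministic sign responses, whose correlation works out to the sawtooth 2|x−y|/π − 1) can only reach the classical bound of 2. The measurement angles below are the standard CHSH choices, not something stated in the thread:

```python
import math

# CHSH quantity: S = E(a,b) + E(a,b') + E(a',b) - E(a',b').
# Any local hidden-variable theory forces |S| <= 2 (the CHSH inequality);
# quantum singlet correlations E(a,b) = -cos(a - b) exceed that bound.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, -math.pi / 4

def S(E):
    return E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

def quantum(x, y):
    return -math.cos(x - y)  # singlet-state correlation

def lhv(x, y):
    # Correlation of a simple deterministic local model: shared hidden angle,
    # each side answers with the sign of cos(angle - setting). This gives the
    # sawtooth E(x, y) = 2|x - y|/pi - 1, which never beats the bound.
    return 2 * abs(x - y) / math.pi - 1

print(abs(S(quantum)))  # 2*sqrt(2), about 2.828: violates the inequality
print(abs(S(lhv)))      # 2.0: exactly at the classical bound
```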
Well, actually everything has to follow them because of Bell’s Theorem.
Edit: The second link should be to this explanation, which is somewhat less funny, but actually explains the experiments that violate the theorem. Sorry that I took so long, but it appeared that the server was down when I first tried to fix it, so I went and did other things for half an hour.
There is no special line where events become macro-level events. It’s not like you get to 10 atoms or a mole and suddenly everything is deterministic again. Your position right now is subject to indeterminacy. It just happens that you’re big enough that the chance every particle of your body moves together in the same, noticeable direction is very, very small (and by very small I mean that I can confidently predict it will never happen).
In principle our best physics tells us that determinism is just false as a metaphysics. Other people have answered the question you meant to ask, which is whether the extreme indeterminacies of very small particles can affect the actions of much larger collections of particles.
In principle our best physics tells us that determinism is just false as a metaphysics.
As said above and elsewhere, MWI is perfectly deterministic. It’s just that there is no single fact of the matter as to which outcome you will observe from within it, because there’s not just one time-descendant of you.
That’s a fair point, but I don’t think it is quite that easy. On one formulation, a deterministic system is a system whose end conditions are set by the rules of the system and the starting conditions. Under this definition, MWI is deterministic. But often what we mean by determinism is that it is not the case that the world could have been otherwise. For one extension of ‘world’ that is true. But for another extension, the world not only could have been otherwise; it is otherwise. There are also a lot of confusions about our use of indexicals here: what we’re referring to with “I”, “You”, “This”, “That”, “My”, etc. Determinism usually implies that every true statement (including true statements with indexicals) is necessarily true. But it isn’t obvious to me that many worlds gives us that. Also, a common thought experiment to glean people’s intuitions about determinism is basically to say that we live in a universe where a supercomputer that can exactly predict the future is possible. MWI doesn’t allow for that.
Perhaps we shouldn’t try to fit our square-pegged physics into the round holes of traditional philosophical concepts. But I take your point.
Why would determinism have anything to say about indexicals? There aren’t any Turing-complete models that forbid indexical uncertainty; you can always copy a program and put the copies in different environments. So I don’t see what use such a concept of “determinism” would have.
Thinking about this, it isn’t a concern about indexicals but a concern about reference in general. When we refer to an object we’re not referring to its extension throughout all Everett branches, but we’re also referring to an object extended in time. So take a sentence like “The table moved from the center of the room to the corner.” If determinism is true we usually think that all sentences like this are necessary truths and sentences like “The table could have stayed in the center” are false. But I’m not sure what the right way to evaluate these sentences is given MWI.
The world is deterministic at least to the extent that everything knowable is determined (but not necessarily the other way around). This is why you need determinism in the world in order to be able to make decisions (and can’t use something not being determined as a reason for the possibility of making decisions).
I seem to be entering a new stage in my ‘study of Less Wrong beliefs’ where I feel like I’ve identified and assimilated a large fraction of them, but am beginning to notice a collusion of contradictions. This isn’t so surprising, since Less Wrong is the grouped beliefs of many different people, and it’s each person’s job to find their own self-consistent ribbon.
But just to check one of these—Omega’s accurate prediction of your choice in the Newcomb problem, which assumes determinism, is actually impossible, right?
You can get around the universe being non-deterministic because of quantum mechanical considerations using the many worlds hypothesis: all symmetric possible ‘quark’ choices are made, and the universe evolves all of these as branching realities. If your choice to one-box or two-box is dependent upon some random factors, then Omega can’t predict what will happen because when he makes the prediction, he is up-branch of you. He doesn’t know which branch you’ll be in. Or, more accurately, he won’t be able to make a prediction that is true for all the branches.
So long as you make your Newcomb’s choice for what seem like good reasons rather than by flipping a quantum coin, it is likely that very many of you will pick the same good reasons, and that Omega can easily achieve 99% or higher accuracy. I would expect almost no Eliezer Yudkowskys to two-box—if Robin Hanson is right about mangled worlds and there’s a cutoff for worlds of very small amplitude, possibly none of me. Remember, quantum branching does not correspond to high-level decisionmaking.
Yes, most Eliezer Yudkowskys will 1-box. And most byrnemas too. But the new twist (new for me, anyway) is that the Eliezer’s that two-box are the ones that really win, as rare as they are.
The one who wins or loses is the one who makes the decision. You might as well say that if someone buys a quantum lottery ticket, the one who really wins is the future self who wins the lottery a few days later; but actually, the one who buys the lottery ticket loses.
The slight quantum chance that EY will 2-box causes the sum of EYs to lose, relative to a perfect 1-boxer, assuming Omega correctly predicts that chance and randomly fills boxes accordingly. The precise Everett branches where EY 2-boxes and where EY loses are generally different, but the higher the probability that he 1-boxes, the higher his expected value is.
And, also, we define winning as winning on average. A person can get lucky and win the lottery—doesn’t mean that person was rational to play the lottery.
Interestingly, I worked through the math once to see if you could improve on committed 1-boxing by using a strategy of quantum randomness. Assuming Omega fills the boxes such that P(box A has $)=P(1-box), P(1-box)=1 is the optimal solution.
Interesting. I was idly wondering about that. Along somewhat different lines:
I’ve decided that I am a one-boxer,and I will one box. With the following caveat: at the moment of decision, I will look for an anomaly with virtual zero probability. A star streaks across the sky and fuses with another one. Someone spills a glass of milk and halfway towards the ground, the milk rises up and fills itself back into the glass. If this happens, I will 2-box.
Winning the extra amount in this way in a handful of worlds won’t do anything to my average winnings—it won’t even increase it by epsilon. However, it could make a difference if something really important is at stake, where I would want to secure the chance that it happens one time in the whole universe.
Why is this comment being down-voted? I thought it was rather clever to use Omega’s one weak spot—quantum uncertainty—to optimize your winnings even over a set with measure zero.
Because Omega is going to know what triggers you would use for anomalies. A star streaking across the sky is easy to see coming if you know the current state of the universe. As such, Omega would know you are about to two-box even though you are currently planning to one-box.
When the star streaks across the sky, you think, “Ohmigosh! It happened! I’m about to get rich!” Then you open the boxes and get $1000.
Essentially, it boils down to this: If you can predict a scenario where you will two-box instead of one-box than Omega can as well.
The idea of flipping quantum coins is more fool proof. The idea of stars streaking or milk unspilling is only hard for us to see coming. Not to mention it will probably trigger all sorts of biases when you start looking for ways to cheat the system.
Note: I am not up to speed on quantum mechanics. I could be off on a few things here.
OK, right: looking for a merging of stars would be a terrible anomaly to use because that’s probably classical mechanics and Omega-predictable. The milk unspilling would still be a good example, because Omega can’t see it coming either. (He can accurately predict that I will two-box in this case, but he can’t predict that the milk will unspill.)
I would have to be very careful that the anomaly I use is really not predictable. For example, I screwed up with the streaking star. I was already reluctant to trust flipping quantum coins, whatever those are. They would need to be flipped or simulated by some mechanical device and may have all kinds of systematic biases and impracticalities if you are actually trying to flip 10^23^23 coins.
Without having plenty of time to think about it, and say, some physicists advising me, it would probably be wise for me to just one-box.
I didn’t down vote but I confess I don’t really know what you’re talking about in that comment. Why would you two box in that case? What really important thing is at stake? I don’t get it.
OK. The way I’ve understood the problem with Omega is that Omega is a perfect predictor so you have 2 options and 2 outcomes:
you two box --> you get $2,000 ($1000 in each box)
you one box --> you get 1M ($1M in one box, $1000 in the second box)
If Omega is not a perfect predictor, it’s possible that you two box and you get 1,001,000. (Omega incorrectly predicted you’d one box.)
However, if you are likely to 2box using this reasoning, Omega will adjust his prediction accordingly (and will even reduce your winnings when you do 1box—so that you can’t beat him).
My solution was to 1box almost always—so that Omega predicts you will one box, but then ‘cheat’ and 2-box almost never (but sometimes). According to Greg, your ‘sometimes’ has to be over a set of measure 0, any larger than that and you’ll be penalized due to Omega’s arithmetic.
Nothing—if only an extra thousand is at stake, I probably wouldn’t even bother with my quantum caveat. One million dollars would be great anyway. But I can imagine an unfriendly Omega giving me choices where I would really want to have both boxes maximally filled … and then I’ll have to realize (rationally) that I must almost always 1 box, but I can get away with 2-boxing a handful of times. The problem with a handful, is that how does a subjective observer choose something so rarely? They must identify an appropriately rare quantum event.
So this job could even be accomplished by flipping a quantum coin 10000 times and only two-boxing when they come up tails each time. You’re just looking for a decision mechanism that only applies in a handful of branches.
Yes, exactly.
The math is actually quite straight-forward, if anyone cares to see it. Consider a generalized Newcomb’s problem. Box A either contains $A or nothing, while box B contains $B (obviously A>B, or there is no actual problem). Let Pb the probability that you 1-box. Let Po be the probability that Omega fills box A (note that only quantum randomness counts, here. If you decide by a “random” but deterministic process, Omega knows how it turns out, even if you don’t, so Pb=0 or 1). Let F be your expected return.
Regardless of what Omega does, you collect the contents of box A, and have a (1-Pb) probability of collecting the contents of box B. F(Po=1)= A + (1-Pb)B
F(Po=0)=(1-Pb)B
For the non-degenerate cases, these add together as expected. F(Po, Pb) = Po(A + (1-Pb)B) + (1-Po)[(1-Pb)B]
Suppose Po = Pb := P
F(P) = P(A + (1-P)B) + [(1-P)^2] B
=P(A + B—PB) + (1-2P+P^2) B
=PA + PB - (P^2)B + B − 2PB + (P^2)B
=PA + PB + B − 2PB
=B + P(A-B)
If A > B, F(P) is monotonically increasing, so P = 1 is the gives maximum return. If A<B, P=0 is the maximum (I hope it’s obvious to everyone that if box B has MORE money than a full box A, 2-boxing is ideal).
I’m not sure why you take Po = Pb. If Omega is trying to maximize his chance of predicting correctly then he’ll take Po = 1 if Pb > 1⁄2 and Pb = 0 if Pb < 1⁄2. Then, assuming A > B / 2, the optimal choice is Po = 1⁄2.
Actually, if Omega behaves this way there is a jump discontinuity in expected value at Po=1/2. We can move the optimum away from the discontinuity by postulating there is some degree of imprecision in our ability to choose a quantum coin with the desired characteristic. Maybe when we try to pick a coin with bias Po we end up with a coin with bias Po+e, where e is an error chosen from a uniform distribution over [Po-E, Po+E]. The optimal choice of Po is now 1⁄2 + 2E, assuming A > 2EB, which is the case for sufficiently small E (E < 1⁄4 suffices). The expected payoff is now robust (continuous) to small perturbations in our choice of Po.
A good point.
Your solution does have Omega maximize right answers. My solution works if Omega wants the “correct” result summed over all Everett branches: for every you that 2-boxes, there exists an empty box A, even if it doesn’t usually go to the 2-boxer.
Both answers are correct, but for different problems. The “classical” Newcomb’s problem is unphysical, just as byrnema initially described. A “Quantum Newcomb’s problem” requires specifying how Omega deals with quantum uncertainty.
Interesting. Since the spirit of Newcomb’s problem depends on 1-boxing have a higher payoff, I think it makes sense to additionally postulate your solution to quantum uncertainty, as it maintains the same maximizer. That’s so even if the Everett interpretation of QM is wrong.
Let p be the probability that you 2-box, and suppose (as Greg said) that Omega lets P(box A empty) = p with its decision being independent of yours. It sounds like you’re saying you only care about the frequency with which you get the maximal reward. This is P(you 2-box)*P(box A full) = p(1-p) which is maximized by p=0.5, not by p infinitesimally small.
I think Omega’s capabilities serve a LCPW function in thought experiments; it makes the possibilities simpler to consider than a more physically plausible setup might.
Also, I’d say that our wetware brains probably aren’t close to deterministic in how we decide (though it would take knowledge far beyond what we currently have to be sure of this), but e.g. an uploaded brain running on a classical computer would be perfectly (in principle) predictable.
What Omega can do instead is simulate every branch and count the number of branches in which you two-box, to get a probability, and treat you as a two-boxer if this probability is greater than some threshold. This covers both the cases where you roll a die, and the cases where your decision depends on events in your brain that don’t always go the same way. In fact, Omega doesn’t even need to simulate every branch; a moderate sized sample would be good enough for the rules of Newcomb’s problem to work as they’re supposed to.
But the real reason for treating Omega as a perfect predictor is that one of the more natural ways of modeling an imperfect predictor is to decompose it into some probability of being a perfect predictor and some probability of its prediction being completely independent of your choice, the probabilities depending on how good a predictor you think it really is. In that context, denying the possibility that a perfect predictor could exist is decidedly unhelpful.
Thanks to everyone who replied. So I see that we don’t really believe that the universe is deterministic in the way implied by the problem. OK, that’s consistent then.
I’m sufficiently uninformed on how quantum mechanics would interact with determinism that so far I’ve been operating under the assumption that it doesn’t. Maybe someone here can enlighten me? Does the behavior of things-that-behave-quantumly typically affect macro-level events, or is this restricted to when you look at them and record experimental data as a direct causal result of the behavior? Is there some way to prove that quantum events are random, as opposed to caused deterministically by something we just haven’t found? (I’m not sure even in principle how you could prove that something is random. It’d be proving the negative on the existence of causation for a possibly-hidden cause.)
Yes; since many important macroscopic events (e.g. weather, we’re quite sure) are extremely sensitive to initial conditions, two Everett branches that differ only by a single small quantum event can quickly diverge in macroscopic behavior.
Yes. They only appear weird if you look at small enough scales, but classical electrons would not have stable orbits, so without quantum effects there’d be no stable atoms.
No, but there is evidence. There is a proof that if they were caused by something unknown but deterministic (or if there even was a classical probability function for certain events) then they would follow Bell’s inequalities. But that appears not to be the case.
But this is where things get really shaky for materialism. If something cannot be explained in X, this means there is something outside X that determines it.
Materialists must hope that in spite of Bell’s inequalities, there is some kind of non-random mechanism that would explain quantum events, regardless of whether it is possible for us to deduce it.
Alicorn asked above:
In principle, you can’t. And one of the foundational (but non-obvious) assumptions of materialism is that nothing is truly random. The non-refutability of materialism depends upon never being able to demonstrate that something is actually random.
Later edit: I realize that this comment is somewhat of a non-sequitur in the context of this thread. (oops) By way of explanation: these kinds of questions have been my motivation for thinking about Newcomb in the first place. Sometimes I’m worried about whether materialism is self-consistent, sometimes I’m worried about whether dualism is a coherent idea within the context of materialism, and these questions are often conflated in my mind as a single project.
In that case I am not a materialist. I don’t believe in any entities that materialists don’t believe in, but I do believe that you have to resort to Many Worlds in order to be right and believe in determinism. Questions that amount to asking “which Everett branch are we in” can have nondeterministic answers.
No worries—you can still be a materialist. Many worlds is the materialist solution to the problem of random collapse. (But I think that’s what you just wrote—sorry if I misunderstood something.)
Suppose that a particle has a perfectly undetermined choice to go left or go right. If the particle goes left, a materialist must hold in principle that there is a mechanism that determined the direction, but then they can’t say the direction was undetermined.
Many worlds says that both directions were chosen, and you happen to find yourself in the one where the particle went left. So there is no problem with something outside the system swooping down and making an arbitrary decision.
Those sorts of question can arise in non-QM contexts too.
Or, of course, the causes could be non-local.
What are Bell’s inequalities, and why do quantumly-behaving things with deterministic causes have to follow them?
Alicorn, if you’re free after dinner tomorrow, I can probably explain this one.
The EPR paradox (Einstein-Podolsky-Rosen paradox) is a set of experiments that suggest ‘spooky action at a distance’ because particles appear to share information instantaneously, at a distance, long after an interaction between them.
People applying “common sense” would like to argue that there is some way that the information is being shared—some hidden variable that collects and shares the information between them.
Bell’s Inequality only assumes that there is some such hidden variable operating locally* -- with no specifications of any kind on how it works—and deduces correlations between particles sharing information that are in contradiction with experiment.
* that is, mechanically rather than ‘magically’ at a distance
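The argument can be made concrete with the CHSH form of Bell’s inequality. The sketch below (my own illustration, using the standard CHSH combination rather than anything specific from this thread) enumerates every deterministic local hidden-variable assignment — definite ±1 outcomes for each of Alice’s two settings and Bob’s two settings — and checks the maximum achievable value:

```python
from itertools import product

# CHSH combination of correlations:
#   S = E(a,b) + E(a,b') + E(a',b) - E(a',b')
# For a deterministic local hidden variable, each correlation is just the
# product of predetermined outcomes A(x) * B(y), with each outcome +1 or -1.

def chsh(A_a, A_a2, B_b, B_b2):
    return A_a * B_b + A_a * B_b2 + A_a2 * B_b - A_a2 * B_b2

# Enumerate all 16 deterministic assignments of outcomes.
best = max(abs(chsh(*outcomes)) for outcomes in product([-1, 1], repeat=4))

# Every local hidden-variable strategy satisfies |S| <= 2, while quantum
# mechanics predicts (and experiment confirms) values up to 2*sqrt(2) ~ 2.83.
print(best)  # 2
```

No specification of how the hidden variable works is needed: the bound of 2 holds for every possible assignment, which is exactly what the experiments violate.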
Um… am I missing something or did no one link to, ahem:
http://lesswrong.com/lw/q1/bells_theorem_no_epr_reality/
Thank you, although I find this a little too technical to wrap my brain around at the moment.
Well, actually everything has to follow them because of Bell’s Theorem.
Edit: The second link should be to this explanation, which is somewhat less funny, but actually explains the experiments that violate the inequalities. Sorry that I took so long, but it appeared that the server was down when I first tried to fix it, so I went and did other things for half an hour.
There’s no good explanation anywhere. :(
There is no special line where events become macro-level events. It’s not like you get to 10 atoms or a mole and suddenly everything is deterministic again. Your position right now is subject to indeterminacy. It just happens that you’re big enough that the chance that every particle of your body moves together in the same, noticeable direction is very, very small (and by very small I mean that I can confidently predict it will never happen).
In principle our best physics tells us that determinism is just false as a metaphysics. Other people have answered the question you meant to ask, which is whether the extreme indeterminacies of very small particles can affect the actions of much larger collections of particles.
IAWYC except, of course, for this:
As said above and elsewhere, MWI is perfectly deterministic. It’s just that there is no single fact of the matter as to which outcome you will observe from within it, because there’s not just one time-descendant of you.
That’s a fair point, but I don’t think it is quite that easy. On one formulation, a deterministic system is a system whose end conditions are set by the rules of the system and the starting conditions. Under this definition, MWI is deterministic. But often what we mean by determinism is that it is not the case that the world could have been otherwise. For one extension of ‘world’ that is true. But for another extension, the world not only could have been otherwise; it is otherwise. There are also a lot of confusions about our use of indexicals here: what we’re referring to with “I”, “You”, “This”, “That”, “My”, etc. Determinism usually implies that every true statement (including true statements with indexicals) is necessarily true. But it isn’t obvious to me that many worlds gives us that. Also, a common thought experiment to glean people’s intuitions about determinism is basically to say that we live in a universe where a supercomputer that can exactly predict the future is possible. MWI doesn’t allow for that.
Perhaps we shouldn’t try to fit our square-pegged physics into the round holes of traditional philosophical concepts. But I take your point.
Why would determinism have anything to say about indexicals? There aren’t any Turing-complete models that forbid indexical uncertainty; you can always copy a program and put the copies in different environments. So I don’t see what use such a concept of “determinism” would have.
Thinking about this, it isn’t a concern about indexicals but a concern about reference in general. When we refer to an object we’re not referring to its extension throughout all Everett branches, but we are referring to an object extended in time. So take a sentence like “The table moved from the center of the room to the corner.” If determinism is true we usually think that all sentences like this are necessary truths and sentences like “The table could have stayed in the center” are false. But I’m not sure what the right way to evaluate these sentences is given MWI.
Voted down because my writing is confusing or because I said something stupid?
Perfection is impossible, but a very, very accurate prediction might be possible.
The world is deterministic at least to the extent that everything knowable is determined (but not necessarily the other way around). This is why you need determinism in the world in order to be able to make decisions (and can’t use something not being determined as a reason for the possibility of making decisions).
Yes.