The “paradox” of Newcomb’s Problem arises because Omega leaves the room. I’m going to steel-man the situation a little bit by stipulating that Omega doesn’t put the million in the black box if it predicts that you will take longer than 1 minute to decide, or if it predicts that you will somehow randomize your decision using some process external to your brain.
Now, instinctively, some people want to say something to themselves like “Well, Omega has left the room. No matter whether I reach for one box or both of them, the amounts of money within them aren’t going to change. So I might as well take both and get as much money as I can, given what has already occurred,” and then they take both boxes. The problem is, Omega predicted that they would go through that very chain of reasoning, and thus didn’t fill the black box with a million dollars.
A better approach is to say to yourself, “Omega has probably correctly predicted what I’m going to do. So, I’ll perform the action that, if Omega predicted it, will end up with me getting the most money.” And then you end up with the million.
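To make the comparison concrete, here is a minimal sketch (my own illustration, not part of the original problem statement; the function name and parameters are hypothetical) of the expected payoffs for the two dispositions, assuming Omega’s prediction matches your actual choice with probability p:

```python
# Hedged sketch: compare expected payoffs, assuming Omega's prediction
# matches the agent's actual choice with probability p.

def expected_payoff(action, p=0.99):
    million, thousand = 1_000_000, 1_000
    if action == "one-box":
        # With probability p, Omega predicted one-boxing and filled the black box.
        return p * million
    # Otherwise: with probability p, Omega predicted two-boxing and left the
    # black box empty; with probability 1 - p, it guessed wrong and both pay out.
    return p * thousand + (1 - p) * (million + thousand)

for action in ("one-box", "two-box"):
    print(action, expected_payoff(action))
# p = 0.99 gives roughly $990,000 for one-boxing vs. $11,000 for two-boxing;
# two-boxing only pulls ahead once p drops below about 0.5005.
```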
Now, some counter by saying “Wait a second... then for the two-boxers, it’s already too late for them to change their decision! So they’re not irrational, just unlucky (to have the disposition to two-box).” And, from the outside, I might agree. But the person interacting with Omega can’t make the same argument! Because then Omega would have predicted that the two-boxer would say that and then use it as an excuse to take both boxes.
However, humans can’t predict each other’s behavior anywhere close to as well as a hypothetical brain-scanning superintelligence can. So we use precommitment instead. AIs who can read each other’s code, though...again, they might be able to very accurately predict each other’s decisions.
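To illustrate that last point, here is a toy sketch (my own construction, with hypothetical names) in which the predictor can simply run the agent’s decision procedure, the way an AI might simulate another agent whose source code it can read:

```python
# Toy sketch: Omega "predicts" by simulating the agent's decision procedure,
# then fills the boxes accordingly. Purely illustrative, not anyone's actual
# agent design.

def omega_fills_black_box(agent):
    # Fill the black box only if the simulated agent one-boxes.
    return agent() == "one-box"

def payoff(agent):
    black_box = 1_000_000 if omega_fills_black_box(agent) else 0
    white_box = 1_000
    return black_box + white_box if agent() == "two-box" else black_box

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(payoff(one_boxer))  # 1000000
print(payoff(two_boxer))  # 1000
```

The one-boxing agent walks away with the million precisely because the procedure that generates its choice is the same procedure the predictor consulted when filling the boxes.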
I don’t see how quantum physics has anything to do with this problem. Quantum randomness events don’t (with probability epsilon) reach such a high macro-scale as that of a human decision. I’ve never dropped a penny and seen quantum randomness carry it a meter (or even a centimeter) off to the right.
“Quantum randomness events don’t (with probability epsilon) reach such a high macro-scale as that of a human decision.”

Yes, they do reach the macroscopic level, or QM would not be an experimental science.