Except that if your decision procedure results in you one-boxing, you’ll lose more often than not in similar situations in real life.
Sure, people who give this answer are ignoring things about the thought experiment that make one-boxing the obvious win—like Annoyance said, if you use rationality you can follow the evidence without necessarily having an explanation of how it works. Sure, tarot cards have been shown to predict the weather and one-boxing has been shown to result in better prizes.
However, we don’t make decisions using some simple algorithm. Our decisions come primarily from our character: by building good habits of character, we engage in good activities. By developing good rationalist habits, we behave more rationally. And the decision procedure that leads you to two-box on the completely fictional and unrealistic thought experiment is the same decision procedure that would make you win in real life.
Don’t base your life on a fiction.
I do not base my life on the fiction of Newcomb’s problem, but I do take lessons from it. Not the lesson that an amazingly powerful creature is going to offer me a million dollars, but the lesson that it is possible to try and fail to be rational, by missing a step, or that I may jump too soon to the conclusion that something is “impossible”, or that trying hard to learn more rationality tricks will profit me, even if not as much as that million dollars.
I don’t see how this works. Are you saying that betting on an outcome with extremely high probability and a very good payoff is somehow a bad idea in real life?
Can you give an example of how my one-box reasoning would lead to a bad result?
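To put numbers on the “extremely high probability, very good payoff” point, here’s a quick expected-value sketch. The $1,000 visible-box payoff and the predictor accuracies are my own illustrative assumptions; only the million dollars comes from the problem as discussed above.

```python
# Quick expected-value comparison for the "high probability, good payoff" point.
# Assumptions (mine, for illustration): $1,000 in the visible box, $1,000,000
# in the opaque box when the predictor expects one-boxing, accuracy p.

def expected_values(p, small=1_000, big=1_000_000):
    ev_one_box = p * big                # big prize whenever the prediction is right
    ev_two_box = small + (1 - p) * big  # small prize always, big prize only on a miss
    return ev_one_box, ev_two_box

for p in (0.5, 0.51, 0.9, 0.99):
    one, two = expected_values(p)
    print(f"accuracy {p:.2f}: one-box EV ${one:,.0f}, two-box EV ${two:,.0f}")
```

On these numbers, one-boxing comes out ahead once the predictor is right a little more than half the time; the disagreement here is about whether such a predictor is possible at all, not about this arithmetic.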
See, here’s my rationale for one-boxing: if somebody copied my brain and ran it in a simulation, I can assume that it would make the same decision I would… and therefore it is at least possible for the predictor to be perfect. If the predictor also does, in fact, have a perfect track record, then it makes sense to use an approach that would result in a consistent win, regardless of whether “I” am the simulation or the “real” me.
Or, to put it another way, the only way that I can win by two-boxing, is if my “simulation” (however crude) one-boxes...
Which means I need to be able to tell with perfect accuracy whether I’m being simulated or not. Or more precisely, both the real me AND the simulation must be able to tell, because if the simulated me thinks it’s real, it will two-box… and if I can’t conclusively prove I’m real, I must one-box, to prevent the real me from being screwed over.
Thus the only “safe” strategy is to one-box, since I cannot prove that I am NOT being simulated at the time I make the decision… which would be the only way to be sure I could outsmart the Predictor.
(Note, by the way, that it doesn’t matter whether the Predictor’s method of prediction is based on copying my brain or not; it’s just a way of representing the idea of a perfect or near-perfect prediction mechanism. The same logic applies to low-fidelity simulations, even simple heuristic methods of predicting my actions.)
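Here’s a minimal sketch of that rationale, assuming the Predictor works by running the agent’s own decision procedure; the function names and payoffs are my illustration, not part of the problem as stated.

```python
# Toy model of the rationale above: the Predictor runs the very same decision
# procedure the agent will run, so the "simulation" and the "real" run cannot
# diverge. Names and payoffs are illustrative assumptions.

SMALL, BIG = 1_000, 1_000_000

def fill_boxes(decision_procedure):
    # The Predictor "simulates" the agent by calling its decision procedure.
    predicted = decision_procedure()
    return BIG if predicted == "one-box" else 0   # contents of the opaque box

def play(decision_procedure):
    opaque = fill_boxes(decision_procedure)
    choice = decision_procedure()                 # the "real" decision
    return opaque if choice == "one-box" else opaque + SMALL

print("one-boxer gets:", play(lambda: "one-box"))   # 1000000
print("two-boxer gets:", play(lambda: "two-box"))   # 1000
```

The only way to walk away with $1,001,000 would be for the “simulated” run to one-box while the “real” run two-boxes, and a single deterministic procedure can’t do both, which is the point of the argument above.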
Whew. Anyway, I’m curious to see how that decision procedure would lead to bad results in real-world situations… heck, I’m curious to see how I would ever apply that line of reasoning to a real world situation. ;-)
Well, it seems you’ve grasped the better part of my point, anyway.
To begin, you’re assuming that simulating a human is possible. We don’t know how to do it yet, and there really aren’t good reasons to just assume that it is. You’re jumping the gun by granting that assumption up front.
Here’s my rationale for not one-boxing:
However much money is in the boxes, it’s already in there when I’m making my decision. I’m allowed to take all of the boxes, some of which contain money. Therefore, to maximize my money, I should take all of the boxes. The only way to deny this is to grant that either: 1. my making the decision affects the past, or 2. other folks can know in advance what decisions I’m going to make. Since neither of these holds in reality, there is no real situation in which the decision-procedure that leads to two-boxing is inferior to the decision-procedure that leads to one-boxing.
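For what it’s worth, here is a sketch of that dominance argument, holding the already-filled box contents fixed; the $1,000 / $1,000,000 payoffs are the usual illustrative figures, not something specified above.

```python
# Dominance sketch: once the boxes are filled, the contents are fixed, and
# for either possible filling, taking both boxes pays $1,000 more.
SMALL, BIG = 1_000, 1_000_000

for opaque in (0, BIG):   # whatever is already in the opaque box
    print(f"opaque box holds ${opaque:,}: "
          f"one-box gets ${opaque:,}, two-box gets ${opaque + SMALL:,}")
```

Whether the contents really are independent of your decision procedure is, of course, exactly what the rest of this exchange disputes.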
In a real life version of the thought experiment, the person in charge of the experiment would just be lying to you about the predictor’s accuracy, and you’d be a fool to one-box. These are the situations we should be prepared for, not fantasies.
I’m making the assumption that we’ve verified the predictor’s accuracy, so it doesn’t really matter how the predictor achieves it.
In any case, this basically boils down to cause-and-effect once again: if you believe in “free will”, then you’ll object to the existence of a Predictor, and base your decisions accordingly. If you believe, however, in cause-and-effect, then a Predictor is at least theoretically possible.
(By the way, if humans have free will—i.e., the ability to behave in an acausal manner—then so do subatomic particles.)
Right, and I’m saying that the assumption of a verified, highly accurate predictor only holds in fiction, and so using decision procedures based on it is irrational.
I’m afraid the free-will point is a straw man; I’m with Dennett on free will. However, in most situations you find yourself in, believing that a Predictor has the aforementioned power is bad for you, free will or no.
Also, I’m not sure what you mean by ‘at least theoretically possible’. Do you mean ‘possible or not possible’? Or ‘not yet provably impossible’? The Predictor is at best unlikely, and might be physically impossible even in a completely deterministic universe (entirely due to practicality / engineering concerns / the amount of matter in the universe / the amount that one has to model).
As for the parenthetical claim that human free will would imply free will for subatomic particles: this does not logically follow. Insert missing premises?
As for the rest, I’m surprised you think it would take so much engineering to simulate a human brain… we’re already working on simulating small parts of a mouse brain, and there aren’t that many more orders of magnitude left. Similarly, if you think nanotech will make cryonics practical at some point, then the required technology is on par with what you’d need to make a brain in a jar… or to just duplicate the person and use their answer.
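For a rough sense of the “orders of magnitude” claim, here is a back-of-the-envelope comparison using commonly cited neuron counts (roughly 70 million for a mouse brain and 86 billion for a human brain; both are approximations, and neuron count is only one crude proxy for simulation difficulty).

```python
# Back-of-the-envelope check on the "orders of magnitude" gap, using commonly
# cited (approximate) neuron counts: ~7e7 for a mouse brain, ~8.6e10 for a
# human brain. Neuron count is only a crude proxy for simulation difficulty.
import math

mouse_neurons = 7e7
human_neurons = 8.6e10

ratio = human_neurons / mouse_neurons
print(f"human/mouse neuron ratio: about {ratio:,.0f}x")
print(f"that is roughly {math.log10(ratio):.1f} orders of magnitude")
```

Whether three-ish orders of magnitude counts as “not that many” is the contested part, and synapse counts and simulation fidelity would push the gap higher.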