I don’t see how this works. Are you saying that betting on an outcome with extremely high probability and a very good payoff is somehow a bad idea in real life?
Can you give an example of how my one-box reasoning would lead to a bad result, for example?
See, here’s my rationale for one-boxing: if somebody copied my brain and ran it in a simulation, I can assume that it would make the same decision I would… and therefore it is at least possible for the predictor to be perfect. If the predictor also does, in fact, have a perfect track record, then it makes sense to use an approach that would result in a consistent win, regardless of whether “I” am the simulation or the “real” me.
Or, to put it another way, the only way I can win by two-boxing is if my “simulation” (however crude) one-boxes…
Which means I need to be able to tell with perfect accuracy whether I’m being simulated or not. Or more precisely, both the real me AND the simulation must be able to tell, because if the simulated me thinks it’s real, it will two-box… and if I can’t conclusively prove I’m real, I must one-box, to prevent the real me from being screwed over.
Thus the only “safe” strategy is to one-box, since I cannot prove that I am NOT being simulated at the time I make the decision… which would be the only way to be sure I could outsmart the Predictor.
(Note, by the way, that it doesn’t matter whether the Predictor’s method of prediction is based on copying my brain or not; it’s just a way of representing the idea of a perfect or near-perfect prediction mechanism. The same logic applies to low-fidelity simulations, even simple heuristic methods of predicting my actions.)
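Just to make the payoff logic concrete, here’s a toy expected-value sketch in Python. The thousand-dollar / million-dollar amounts and the accuracy parameter are my own illustrative assumptions (the way the problem is usually stated), not anything from this thread:

```python
# Toy expected-value comparison for the two strategies.
# Assumed payoffs (the customary statement of the problem, not from this thread):
#   transparent box: $1,000 always
#   opaque box:      $1,000,000 iff the Predictor predicted one-boxing
# 'accuracy' is the probability that the Predictor predicts my actual choice.

def expected_value(choice: str, accuracy: float) -> float:
    if choice == "one-box":
        # Predictor right -> opaque box is full; Predictor wrong -> it's empty.
        return accuracy * 1_000_000
    else:  # "two-box"
        # Predictor right -> opaque box is empty; Predictor wrong -> it's full.
        return accuracy * 1_000 + (1 - accuracy) * 1_001_000

for acc in (0.5, 0.9, 0.99, 1.0):
    print(f"accuracy={acc}: one-box=${expected_value('one-box', acc):,.0f}, "
          f"two-box=${expected_value('two-box', acc):,.0f}")
```

With these numbers the Predictor only has to beat a coin flip by a hair (break-even is around 50.05% accuracy) before one-boxing pulls ahead, which is why I don’t think the strategy hinges on the Predictor being literally perfect.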
Whew. Anyway, I’m curious to see how that decision procedure would lead to bad results in real-world situations… heck, I’m curious to see how I would ever apply that line of reasoning to a real world situation. ;-)
heck, I’m curious to see how I would ever apply that line of reasoning to a real world situation. ;-)
Well, it seems you’ve grasped the better part of my point, anyway.
To begin, you’re assuming that simulating a human is possible. We don’t know how to do it yet, and there really aren’t good reasons to just assume that it is. You’re really jumping the gun by letting that one under the tent.
Here’s my rationale for not one-boxing:
However much money is in the boxes, it’s already in there when I’m making my decision. I’m allowed to take all of the boxes, some of which contain money. Therefore, to maximize my money, I should take all of the boxes. The only way to deny this is to grant that either: 1. my making the decision affects the past, or 2. other folks can know in advance what decisions I’m going to make. Since neither of these holds in reality, there is no real situation in which the decision-procedure that leads to two-boxing is inferior to the decision-procedure that leads to one-boxing.
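Spelled out with the usual (purely illustrative) dollar amounts, the dominance point is just this:

```python
# Dominance argument: the contents of the boxes are fixed before I choose,
# so whichever state the opaque box is already in, two-boxing pays exactly
# $1,000 more than one-boxing.
# ($1,000 / $1,000,000 are the customary illustrative amounts, not from this thread.)

for opaque_contents in (0, 1_000_000):
    one_box = opaque_contents
    two_box = opaque_contents + 1_000
    print(f"opaque box already holds ${opaque_contents:,}: "
          f"one-box -> ${one_box:,}, two-box -> ${two_box:,}")
```

In either case two-boxing comes out $1,000 ahead, so the only way one-boxing wins is if my choice somehow reaches back and changes what’s already in the box.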
In a real-life version of the thought experiment, the person in charge of the experiment would just be lying to you about the predictor’s accuracy, and you’d be a fool to one-box. These are the situations we should be prepared for, not fantasies.
I’m making the assumption that we’ve verified the predictor’s accuracy, so it doesn’t really matter how the predictor achieves it.
In any case, this basically boils down to cause-and-effect once again: if you believe in “free will”, then you’ll object to the existence of a Predictor, and base your decisions accordingly. If you believe, however, in cause-and-effect, then a Predictor is at least theoretically possible.
(By the way, if humans have free will—i.e., the ability to behave in an acausal manner—then so do subatomic particles.)
I’m making the assumption that we’ve verified the predictor’s accuracy, so it doesn’t really matter how the predictor achieves it.
Right, and I’m saying that that assumption only holds in fiction, and so using decision procedures based on it is irrational.
In any case, this basically boils down to cause-and-effect once again: if you believe in “free will”, then you’ll object to the existence of a Predictor, and base your decisions accordingly. If you believe, however, in cause-and-effect, then a Predictor is at least theoretically possible.
I’m afraid this is a straw man—I’m with Dennett on free will. However, in most situations you find yourself in, believing that a Predictor has the aforementioned power is bad for you, free will or no.
Also, I’m not sure what you mean by ‘at least theoretically possible’. Do you mean ‘possible or not possible’? Or ‘not yet provably impossible’? The Predictor is at best unlikely, and might be physically impossible even in a completely deterministic universe (entirely due to practicality / engineering concerns / the amount of matter in the universe / the amount that one has to model).
(By the way, if humans have free will—i.e., the ability to behave in an acausal manner—then so do subatomic particles.)
This does not logically follow. Insert missing premises?
As for the rest, I’m surprised you think it would take so much engineering to simulate a human brain… we’re already working on simulating small parts of a mouse brain, and there aren’t that many more orders of magnitude left. Similarly, if you think nanotech will make cryonics practical at some point, then the required technology is on par with what you’d need to make a brain in a jar… or just duplicate the person and use their answer.
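A rough back-of-the-envelope check, using ballpark published neuron counts (my approximate figures, not precise ones):

```python
import math

# Approximate neuron counts (ballpark estimates, not exact figures).
mouse_neurons = 7.1e7    # roughly 71 million
human_neurons = 8.6e10   # roughly 86 billion

gap = math.log10(human_neurons / mouse_neurons)
print(f"mouse -> human is about {gap:.1f} orders of magnitude")  # ~3.1
```

Call it roughly three orders of magnitude in neuron count: a big gap, but hardly an astronomical one.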