Heck, I’m curious to see how I would ever apply that line of reasoning to a real-world situation. ;-)
Well, it seems you’ve grasped the better part of my point, anyway.
To begin, you’re assuming that simulating a human is possible. We don’t know how to do it yet, and there really aren’t good reasons to just assume that it is. You’re jumping the gun by letting that assumption under the tent.
Here’s my rationale for not one-boxing:
However much money is in the boxes, it’s already in there when I’m making my decision. I’m allowed to take all of the boxes, some of which contain money. Therefore, to maximize my money, I should take all of the boxes. The only way to deny this is to grant at least one of the following:

1. My making the decision affects the past.
2. Other folks can know in advance what decisions I’m going to make.

Since neither of these holds in reality, there is no real situation in which the decision-procedure that leads to two-boxing is inferior to the decision-procedure that leads to one-boxing.
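Spelled out with the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one; the exact dollar amounts are my assumption, since we never fixed them), the dominance argument is just this:

```python
# Dominance argument: whatever the opaque box already contains,
# two-boxing nets exactly $1,000 more. (Standard payoffs assumed:
# $1,000,000 in the opaque box if the predictor foresaw one-boxing,
# $1,000 always in the transparent box.)
for opaque in (0, 1_000_000):
    one_box = opaque
    two_box = opaque + 1_000
    print(f"opaque box holds ${opaque:>9,}: "
          f"one-box gets ${one_box:>9,}, two-box gets ${two_box:>9,}")
```

Row by row, two-boxing comes out exactly $1,000 ahead; the whole dispute is over whether which row you’re in can depend on which choice you make.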
In a real life version of the thought experiment, the person in charge of the experiment would just be lying to you about the predictor’s accuracy, and you’d be a fool to one-box. These are the situations we should be prepared for, not fantasies.
I’m making the assumption that we’ve verified the predictor’s accuracy, so it doesn’t really matter how the predictor achieves it.
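Concretely, here’s a minimal sketch of why, assuming the same standard payoffs as above: once the accuracy p is verified, the expected values depend only on p, not on the mechanism behind it.

```python
# Expected value of each choice against a predictor with verified
# accuracy p (standard payoffs assumed: $1,000,000 / $1,000).
def ev_one_box(p, big=1_000_000):
    # With probability p the predictor foresaw one-boxing and filled the box.
    return p * big

def ev_two_box(p, big=1_000_000, small=1_000):
    # With probability 1 - p the predictor wrongly foresaw one-boxing,
    # so the big box is full anyway; the small box is yours regardless.
    return (1 - p) * big + small

for p in (0.5, 0.6, 0.9, 0.99):
    print(f"p = {p:.2f}: one-box ${ev_one_box(p):>9,.0f}, "
          f"two-box ${ev_two_box(p):>9,.0f}")
```

One-boxing pulls ahead for any p above (big + small) / (2 * big), about 50.05% here, so the predictor’s inner workings never enter into it.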
In any case, this basically boils down to cause-and-effect once again: if you believe in “free will”, then you’ll object to the existence of a Predictor and base your decisions accordingly. If, however, you believe in cause-and-effect, then a Predictor is at least theoretically possible.
(By the way, if humans have free will—i.e., the ability to behave in an acausal manner—then so do subatomic particles.)
I’m making the assumption that we’ve verified the predictor’s accuracy, so it doesn’t really matter how the predictor achieves it.
Right, and I’m saying that that assumption only holds in fiction, and so using decision procedures based on it is irrational.
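To be clear about what “verified” can even mean: a check like the sketch below (the 99% claimed accuracy and the trial counts are illustrative assumptions) only measures how surprising the record would be if it were honestly generated. It can’t rule out that the record was staged, which is exactly my worry.

```python
# Likelihood ratio for "99%-accurate predictor" vs. "coin-flip guesser",
# given an observed record of past plays. The 0.99 figure and the trial
# counts are illustrative assumptions, and the whole calculation
# presumes the record itself is honest.
def likelihood_ratio(correct, total, claimed=0.99, chance=0.5):
    p_claimed = claimed**correct * (1 - claimed)**(total - correct)
    p_chance = chance**total
    return p_claimed / p_chance

for n in (10, 50, 100):
    print(f"{n} flawless predictions: likelihood ratio = "
          f"{likelihood_ratio(n, n):.3g}")
```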
In any case, this basically boils down to cause-and-effect once again: if you believe in “free will”, then you’ll object to the existence of a Predictor and base your decisions accordingly. If, however, you believe in cause-and-effect, then a Predictor is at least theoretically possible.
I’m afraid this is a straw man—I’m with Dennett on free will. However, in most situations you find yourself in, believing that a Predictor has the aforementioned power is bad for you, free will or no.
Also, I’m not sure what you mean by ‘at least theoretically possible’. Do you mean ‘possible or not possible’? Or ‘not yet provably impossible’? The Predictor is at best unlikely, and might be physically impossible even in a completely deterministic universe (entirely due to practicality / engineering concerns / the amount of matter in the universe / the amount that one has to model).
(By the way, if humans have free will—i.e., the ability to behave in an acausal manner—then so do subatomic particles.)
This does not logically follow. Insert missing premises?
As for the rest, I’m surprised you think it would take so much engineering to simulate a human brain… we’re already working on simulating small parts of a mouse brain… there aren’t that many more orders of magnitude left. Similarly, if you think nanotech will make cryonics practical at some point, then the required technology is on par with what you’d need to make a brain in a jar… or just duplicate the person and use their answer.
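Here’s a rough sketch of that orders-of-magnitude claim, using ballpark neuron counts (the figures are approximate public estimates, and the scale of current simulations is my assumption):

```python
# Ballpark neuron counts (rough public estimates; the scale of current
# partial simulations is an assumption): how many orders of magnitude
# separate today's simulations from a whole human brain?
import math

neuron_counts = {
    "partial mouse-brain simulation": 1e6,    # assumed scale of current work
    "whole mouse brain":              7.1e7,  # ~71 million neurons
    "whole human brain":              8.6e10, # ~86 billion neurons
}

baseline = neuron_counts["partial mouse-brain simulation"]
for label, n in neuron_counts.items():
    gap = math.log10(n / baseline)
    print(f"{label:32s} ~{n:.1e} neurons ({gap:.1f} orders of magnitude up)")
```

Call it about five orders of magnitude from today’s partial simulations to a whole human brain: a lot of engineering, but not an astronomical amount.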