The part of physics that implies someone cannot scan your brain and simulate inputs so as to perfectly predict your actions is quantum mechanics. But I don’t think invoking it is the best response to your question. Though it does make me wonder how Eliezer reconciles his thoughts on one-boxing with his many-worlds interpretation of QM. Doesn’t many-worlds imply that every game with Omega creates worlds in which Omega is wrong?
If they can perfectly predict your actions, then you have no choice, so talking about which choice to make is meaningless. If you believe you should one-box if Omega can perfectly predict your actions, but two-box otherwise, then you are better off trying to two-box: in that case, you’ve already agreed that you should two-box if Omega can’t perfectly predict your actions. And if Omega can, you won’t be able to two-box unless Omega already predicted that you would, so it won’t hurt to try to two-box.

No, it just makes you deterministic. You still have a choice to make, as you don’t know what Omega predicted (until you make your choice).
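For concreteness, the tradeoff can be put as a quick expected-value sketch. This assumes the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and a predictor that is right with probability p:

```python
def expected_value(one_box: bool, p: float) -> float:
    """Expected payoff against a predictor that is right with probability p.

    Standard Newcomb payoffs assumed: $1,000,000 in the opaque box,
    $1,000 in the transparent box.
    """
    M, K = 1_000_000, 1_000
    if one_box:
        # With probability p the predictor foresaw one-boxing: opaque box is full.
        return p * M
    # With probability p the predictor foresaw two-boxing: opaque box is empty.
    return p * K + (1 - p) * (M + K)

# The crossover is at p = 0.5005: the predictor only has to beat a coin
# flip by a sliver for one-boxing to come out ahead in expectation.
print(expected_value(True, 0.9), expected_value(False, 0.9))
```

On this sketch the dispute isn’t about the arithmetic; it’s about whether these conditional probabilities are the right thing to compute at all, which is exactly where the decision-theory disagreement lives.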
If you find an Omega, then you are in an environment where Omega is possible. Perhaps we are all simulated and QM is optional. Maybe our brains are deterministic enough that Omega can make predictions, much as quantum mechanics ought, in some sense, to prevent predicting where a cannonball will fly but in practice does not. Perhaps it’s a hypothetical where we’re AIs to begin with, so deterministic behavior is just to be expected.
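If the player really is a deterministic program, Omega needs no exotic physics: it can predict simply by running a copy. A minimal sketch (all function names here are hypothetical, invented for illustration):

```python
def omega_fill_boxes(agent):
    """Predict the agent's choice by running a copy of it."""
    predicted_one_box = agent()                # simulate the deterministic agent
    opaque = 1_000_000 if predicted_one_box else 0
    return opaque, 1_000                       # (opaque box, transparent box)

def one_boxer():
    return True   # take only the opaque box

def two_boxer():
    return False  # take both boxes

# Against deterministic agents the simulating predictor is never wrong:
# one_boxer walks away with 1,000,000 and two_boxer with 1,000.
for agent in (one_boxer, two_boxer):
    opaque, transparent = omega_fill_boxes(agent)
    took_one_box = agent()                     # the "real" run makes the same choice
    payoff = opaque if took_one_box else opaque + transparent
    print(agent.__name__, payoff)
```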
If they can perfectly predict your actions, then you have no choice, so talking about which choice to make is meaningless.
This was argued against in the Sequences and in general, doesn’t seem to be a strong argument. It seems perfectly compatible to believe your actions follow deterministically and still talk about decision theory—all the functional decision theory stuff is assuming a deterministic decision process, I think.
Re QM: sometimes I’ve seen it stipulated that the world in which the scenario happens is deterministic. It’s entirely possible that the amount of noise generated by QM isn’t enough to affect your choice (aside from a very unlikely “your brain has a couple of bits changed randomly in exactly the right way to change your choice”, but that should be too unlikely by so many orders of magnitude as not to matter in any expected utility calculation).
This was argued against in the Sequences and in general, doesn’t seem to be a strong argument. It seems perfectly compatible to believe your actions follow deterministically and still talk about decision theory—all the functional decision theory stuff is assuming a deterministic decision process, I think.
It is compatible to believe your actions follow deterministically and still talk about decision theory. It is not compatible to believe your actions follow deterministically, and still talk about decision theory from a first-person point of view, as if you could by force of will violate your programming.
To ask what choice a deterministic entity should make presupposes both that it does, and does not, have choice. Presupposing a contradiction means STOP, your reasoning has crashed and you can prove any conclusion if you continue.
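That last step is the classical principle of explosion (ex falso quodlibet), and it can be checked mechanically; a one-line sketch in Lean:

```lean
-- Ex falso quodlibet: from "P and not P", any proposition Q follows,
-- which is why reasoning must halt once a contradiction is presupposed.
example (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.1 h.2
```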
It is not compatible to believe your actions follow deterministically, and still talk about decision theory from a first-person point of view,
So it’s the pronouns that matter? If I keep using “Aris Katsaris” rather than “I”, that makes a difference to whether the person I’m talking about makes decisions that can be deterministically predicted?
Whether someone can predict your decisions has ZERO relevance to whether you are the one making the decisions. This sort of confusion, where people think that “free will” means “being unpredictable”, is nonsensical; it’s the very opposite. For the decisions to be yours, they must be theoretically predictable, arising from the contents of your brain. Adding in randomness and unpredictability, e.g. by using dice or coin flips, reduces the ownership of the decisions and hence the free will.
This is old and tired territory.

Old and tired, maybe, but clearly there is not much consensus yet (even if, ahem, some people consider it to be as clear as day).
Note that who makes the decision is a matter of control and has nothing to do with freedom. A calculator controls its display, and so the “decision” to output 4 in response to 2+2 is its own, in a way. But applying decision theory to a calculator is nonsensical, and there is no free choice involved.
Have you read http://lesswrong.com/lw/rb/possibility_and_couldness/ and the related posts, and do you have some disagreement with them?

I just now read that one post. It isn’t clear how you think it’s relevant. I’m guessing you think that it implies that positing free will is invalid.
You don’t have to believe in free will to incorporate it into a model of how humans act. We’re all nominalists here; we don’t believe that the concepts in our theories actually exist somewhere in Form-space.
When someone asks the question, “Should you one-box?”, they’re using a model which uses the concept of free will. You can’t object to that by saying “You don’t really have free will.” You can object that it is the wrong model to use for this problem, but then you have to spell out why, and what model you want to use instead, and what question you actually want to ask, since it can’t be that one.
People in the LW community don’t usually do that. I see sloppy statements claiming that humans “should” one-box, based on a presumption that they have no free will. That’s making a claim within a paradigm while rejecting the paradigm. It makes no sense.
Consider what Eliezer says about coin flips:
We’ve previously discussed how probability is in the mind. If you are uncertain about whether a classical coin has landed heads or tails, that is a fact about your state of mind, not a property of the coin. The coin itself is either heads or tails. But people forget this, and think that coin.probability == 0.5, which is the Mind Projection Fallacy: treating properties of the mind as if they were properties of the external world.
The mind projection fallacy is treating the word “probability” not in a nominalist way but in a philosophically realist way, as if probabilities were things existing in the world. Probabilities are subjective. You don’t project them onto the external world. That doesn’t make “coin.probability == 0.5” a false statement. It correctly specifies the distribution of possibilities given the information available within the mind making the probability assessment. I think that is what Eliezer is trying to say there.
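The split can be made explicit in code: the coin object carries only a face, while the 0.5 lives in the observer, where it is vindicated as calibration rather than as a property of any coin. A toy sketch (the `Coin` and `Observer` classes are illustrative inventions, not anything from the post):

```python
import random

class Coin:
    """The coin itself is either heads or tails; it has no probability attribute."""
    def __init__(self, rng):
        self.face = rng.choice(["heads", "tails"])  # already settled

class Observer:
    """The 0.5 describes this mind's information, not the coin."""
    def credence_heads(self, coin):
        return 0.5  # nothing the observer knows distinguishes the two faces

rng = random.Random(0)
coins = [Coin(rng) for _ in range(100_000)]
observer = Observer()

# Calibration: among coins assigned credence 0.5, about half are in fact heads.
frequency = sum(coin.face == "heads" for coin in coins) / len(coins)
print(frequency)
```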
“Free will” is a useful theoretical construct in a similar way. It may not be a thing in the world, but it is a model for talking about how we make decisions. We can only model our own brains; you can’t fully simulate your own brain within your own brain; you can’t demand that we use the territory as our map.
It’s not just the one post, it’s the whole sequence of related posts.
It’s hard for me to summarize it all and do it justice, but it disagrees with the way you’re framing this. I would suggest you read some of that sequence and/or some of the decision theory papers for a defense of “should” notions being used even when believing in a deterministic world, which you reject. I don’t really want to argue the whole thing from scratch, but that is where our disagreement would lie.