If they can perfectly predict your actions, then you have no choice, so talking about which choice to make is meaningless.
This was argued against in the Sequences and, in general, doesn't seem to be a strong argument. It seems perfectly compatible to believe your actions follow deterministically and still talk about decision theory; all the functional decision theory stuff assumes a deterministic decision process, I think.
Re QM: sometimes I've seen it stipulated that the world in which the scenario happens is deterministic. It's entirely possible that the amount of noise generated by QM isn't enough to affect your choice (aside from a very unlikely "your brain has a couple of bits changed randomly in exactly the right way to change your choice" scenario, but that should be so many orders of magnitude too unlikely that it doesn't matter in any expected utility calculation).
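A back-of-the-envelope sketch of that last point, with made-up numbers for the flip probability and the payoff scale (the figures are illustrative assumptions, not anything from the thread):

```python
# Rough arithmetic (hypothetical numbers) for why a QM-induced flip of your
# decision is too unlikely to matter in an expected utility calculation.

P_FLIP = 1e-30              # assumed probability that quantum noise flips the choice
MAX_PAYOFF_GAP = 1_001_000  # largest possible payoff difference in a Newcomb setup ($)

# The most the noise term could shift the expected utility of either option:
max_shift = P_FLIP * MAX_PAYOFF_GAP
print(f"max EU shift from QM noise: ${max_shift:.1e}")  # about 1e-24 dollars

# Any gap the deterministic analysis cares about is on the order of dollars to
# millions of dollars, so a ~1e-24 dollar correction cannot change the ranking.
```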
This was argued against in the Sequences and, in general, doesn't seem to be a strong argument. It seems perfectly compatible to believe your actions follow deterministically and still talk about decision theory; all the functional decision theory stuff assumes a deterministic decision process, I think.
It is compatible to believe your actions follow deterministically and still talk about decision theory. It is not compatible to believe your actions follow deterministically, and still talk about decision theory from a first-person point of view, as if you could by force of will violate your programming.
To ask what choice a deterministic entity should make presupposes both that it does, and does not, have choice. Presupposing a contradiction means STOP: your reasoning has crashed, and you can prove any conclusion if you continue.
It is not compatible to believe your actions follow deterministically, and still talk about decision theory from a first-person point of view,
So it's the pronouns that matter? If I keep using "Aris Katsaris" rather than "I", does that make a difference to whether the person I'm talking about makes decisions that can be deterministically predicted?
Whether someone can predict your decisions has ZERO relevance to whether you are the one making the decisions or not. This sort of confusion, where people think that "free will" means "being unpredictable", is nonsensical; it's the very opposite. For the decisions to be yours, they must be theoretically predictable, arising from the contents of your brain. Adding in randomness and unpredictability, e.g. by using dice or coin flips, reduces the ownership of the decisions and hence the free will.
This is old and tired territory.

Old and tired, maybe, but clearly there is not much consensus yet (even if, ahem, some people consider it to be as clear as day).
Note that who makes the decision is a matter of control and has nothing to do with freedom. A calculator controls its display, and so the "decision" to output 4 in response to 2+2 is its own, in a way. But applying decision theory to a calculator is nonsensical and there is no free choice involved.
Have you read http://lesswrong.com/lw/rb/possibility_and_couldness/ and the related posts, and do you have some disagreement with them?

I just now read that one post. It isn't clear how you think it's relevant. I'm guessing you think that it implies that positing free will is invalid.
You don’t have to believe in free will to incorporate it into a model of how humans act. We’re all nominalists here; we don’t believe that the concepts in our theories actually exist somewhere in Form-space.
When someone asks the question, “Should you one-box?”, they’re using a model which uses the concept of free will. You can’t object to that by saying “You don’t really have free will.” You can object that it is the wrong model to use for this problem, but then you have to spell out why, and what model you want to use instead, and what question you actually want to ask, since it can’t be that one.
People in the LW community don’t usually do that. I see sloppy statements claiming that humans “should” one-box, based on a presumption that they have no free will. That’s making a claim within a paradigm while rejecting the paradigm. It makes no sense.
Consider what Eliezer says about coin flips:
We’ve previously discussed how probability is in the mind. If you are uncertain about whether a classical coin has landed heads or tails, that is a fact about your state of mind, not a property of the coin. The coin itself is either heads or tails. But people forget this, and think that coin.probability == 0.5, which is the Mind Projection Fallacy: treating properties of the mind as if they were properties of the external world.
The mind projection fallacy is treating the word "probability" not in a nominalist way, but in a philosophically realist way, as if probabilities were things existing in the world. Probabilities are subjective. You don't project them onto the external world. That doesn't make "coin.probability == 0.5" a "false" statement. It correctly specifies the distribution of possibilities given the information available to the mind making the probability assessment. I think that is what Eliezer is trying to say there.
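To make the "in the mind" point concrete, here is a minimal toy sketch (my own illustration, not from the post) in which the same already-landed coin gets different probability assignments from observers with different information:

```python
# A minimal illustration of "probability is in the mind": the coin's state is
# a fixed fact; the 0.5 lives in the observer's information, not in the coin.

import random

def probability_of_heads(knows_outcome: bool, outcome: str) -> float:
    """Probability of heads as assessed from a given state of knowledge."""
    if knows_outcome:
        return 1.0 if outcome == "heads" else 0.0
    return 0.5  # knows only that a fair classical coin was flipped

outcome = random.choice(["heads", "tails"])  # the coin has already landed

print(probability_of_heads(knows_outcome=False, outcome=outcome))  # 0.5
print(probability_of_heads(knows_outcome=True,  outcome=outcome))  # 0.0 or 1.0
# Same coin, same world; different probabilities, because the two assessments
# are conditioned on different information.
```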
“Free will” is a useful theoretical construct in a similar way. It may not be a thing in the world, but it is a model for talking about how we make decisions. We can only model our own brains; you can’t fully simulate your own brain within your own brain; you can’t demand that we use the territory as our map.
It’s not just the one post, it’s the whole sequence of related posts.
It's hard for me to summarize it all and do it justice, but it disagrees with the way you're framing this. I would suggest you read some of that sequence and/or some of the decision theory papers for a defense of using "should" notions even while believing in a deterministic world, which is the combination you reject. I don't really want to argue the whole thing from scratch, but that is where our disagreement would lie.
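For what it's worth, here is a minimal sketch (my own toy example, not taken from the sequence or the FDT papers) of a decision procedure that is fully deterministic while "should" still has an obvious reading: which option the procedure's own evaluation ranks highest. The payoffs and predictor accuracy are illustrative assumptions.

```python
# Toy deterministic agent: no randomness anywhere, yet "which action should it
# take?" is a meaningful question about which option its evaluation ranks best.

def expected_value(action: str, predictor_accuracy: float = 0.99) -> float:
    """Newcomb-style payoffs; the accuracy figure is an illustrative assumption."""
    if action == "one-box":
        return predictor_accuracy * 1_000_000
    return predictor_accuracy * 1_000 + (1 - predictor_accuracy) * 1_001_000

def decide() -> str:
    # Same inputs, same output, every time: the choice is deterministic.
    return max(["one-box", "two-box"], key=expected_value)

print(decide())  # "one-box" under these assumed numbers
```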