I just now read that one post. It isn’t clear how you think it’s relevant. I’m guessing you think that it implies that positing free will is invalid.
You don’t have to believe in free will to incorporate it into a model of how humans act. We’re all nominalists here; we don’t believe that the concepts in our theories actually exist somewhere in Form-space.
When someone asks the question, “Should you one-box?”, they’re using a model which uses the concept of free will. You can’t object to that by saying “You don’t really have free will.” You can object that it is the wrong model to use for this problem, but then you have to spell out why, and what model you want to use instead, and what question you actually want to ask, since it can’t be that one.
People in the LW community don’t usually do that. I see sloppy statements claiming that humans “should” one-box, based on a presumption that they have no free will. That’s making a claim within a paradigm while rejecting the paradigm. It makes no sense.
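For what it's worth, the calculation behind those "should one-box" claims is easy to state inside the very model the question presupposes. Here is a minimal sketch, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and a predictor of some accuracy p; the function name and the numbers are mine, for illustration, not anything from this thread:

```python
# A toy expected-value comparison for Newcomb's problem, done inside the model
# where the agent's choice is a free variable. Payoffs are the standard
# (assumed) ones: $1,000,000 in the opaque box, $1,000 in the transparent box.

def expected_payoff(one_box: bool, predictor_accuracy: float) -> float:
    million, thousand = 1_000_000, 1_000
    if one_box:
        # The opaque box is full exactly when the predictor foresaw one-boxing.
        return predictor_accuracy * million
    # Two-boxers always get the thousand, plus the million if the predictor erred.
    return thousand + (1 - predictor_accuracy) * million

for p in (0.5, 0.9, 0.99):
    print(p, expected_payoff(True, p), expected_payoff(False, p))
# One-boxing comes out ahead whenever the predictor is better than ~50.05%
# accurate -- which is the arithmetic behind the "you should one-box" claims.
```

Note that the whole calculation only means anything if "the agent could choose either way" is a live part of the model, which is exactly the point at issue.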
Consider what Eliezer says about coin flips:
We’ve previously discussed how probability is in the mind. If you are uncertain about whether a classical coin has landed heads or tails, that is a fact about your state of mind, not a property of the coin. The coin itself is either heads or tails. But people forget this, and think that coin.probability == 0.5, which is the Mind Projection Fallacy: treating properties of the mind as if they were properties of the external world.
The mind projection fallacy is treating the word "probability" not in a nominalist way, but in a philosophical realist way, as if probabilities were things existing in the world. Probabilities are subjective. You don't project them onto the external world. That doesn't make "coin.probability == 0.5" a "false" statement. It correctly specifies the distribution of possibilities given the information available to the mind making the probability assessment. I think that is what Eliezer is trying to say there.
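To make that concrete, here is a minimal sketch of the distinction (the Coin and Observer classes are my own illustrative names, not anything from Eliezer's post): the coin's face is a definite fact about the world, while the 0.5 lives entirely in the observer's state of information.

```python
import random

class Coin:
    """A classical coin that has already landed: its face is a definite fact."""
    def __init__(self):
        self.face = random.choice(["heads", "tails"])  # definite, merely unknown to us

class Observer:
    """Credence is a property of the observer's information, not of the coin."""
    def __init__(self):
        self.credence_heads = 0.5  # reflects what the observer knows, nothing else

    def look_at(self, coin):
        # Looking changes the observer's information, and hence the credence;
        # the coin itself never changes.
        self.credence_heads = 1.0 if coin.face == "heads" else 0.0

coin = Coin()
alice = Observer()
print(alice.credence_heads)   # 0.5 -- a fact about Alice's state of knowledge
alice.look_at(coin)
print(alice.credence_heads)   # 1.0 or 0.0 -- new information, same coin
```

Two observers with different information can assign different numbers to the same coin, and neither is wrong.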
“Free will” is a useful theoretical construct in a similar way. It may not be a thing in the world, but it is a model for talking about how we make decisions. We can only model our own brains: you can’t fully simulate your own brain within your own brain, so you can’t demand that we use the territory as our map.
It’s not just the one post, it’s the whole sequence of related posts.
It’s hard for me to summarize it all and do it justice, but it disagrees with the way you’re framing this. I would suggest you read some of that sequence and/or some of the decision theory papers for a defense of using “should” notions even while believing in a deterministic world, a combination you reject. I don’t really want to argue the whole thing from scratch, but that is where our disagreement would lie.
Have you read http://lesswrong.com/lw/rb/possibility_and_couldness/ and the related posts and have some disagreement with them?