But if you read the other parts of the solution to “free will”, and then furthermore explicitly formulate TDT, then this is what utterly, finally, completely, and without even a tiny trace of confusion or dissatisfaction or a sense of lingering questions, kills off entirely the question of “free will”.
If this is correct, then it amounts to a profound philosophical and scientific achievement.
Not by my standards.
Free will is about as easy as a problem can get and still be Confusing. Plenty of moderately good reductionists have refused to be confused by it. Killing off the problem entirely is more like dropping nuclear weapons to obliterate the last remnants of a dead horse than any great innovation within the field of reductionism.
There are non-reductionist philosophers who would think of reducing free will as a great and difficult achievement, but by reductionist standards it’s a mostly-solved problem already.
Formal cooperation in the one-shot PD, now that should be interesting.
Free will is counted as one of the great problems of philosophy. Wikipedia lists it as a “central problem of metaphysics”. SEP has a whole, long article on it, along with others on “compatibilism”, “causal determinism”, “free will and fatalism”, “divine foreknowledge”, “incompatibilist (nondeterministic) theories of free will”, and “arguments for incompatibilism”.
If you really have “nuked the dead donkey” here, you would cut out a lot of literature. Furthermore, religious people would no longer be able to use “free will” as a magic incantation with which to defend God.
The only reason free will is regarded as a problem of philosophy is that philosophers are in the rather bizarre habit of defining it as “your actions are uncaused”—it should be no surprise that a nonsensical definition leads to problems!
When we use the correct definition—the one that corresponds to how the term is actually used—“your actions are caused by your own decisions, as opposed to by external coercion”—the problem doesn’t arise.
Dennett and others have used multi-ton high explosives on the dead donkey. Why would nuclear weapons make a further difference?
People respond to math more than to words.
Er… no they don’t?
Some do.
Rather, if one challenges a valid verbal theory, one can usually find some way of convincing people that there is some “wiggle room”, that it may or may not be valid, etc. But a mathematical theory has, I think, an air of respectability that will make people pay attention, even if they don’t like it, and especially if they don’t actually understand the mathematics.
If your theory finds applications (which, given the robotics revolution we seem to be in the middle of, is not vastly unlikely), then it will further marginalize those who stick to the old convenient confusion about free will.
Of course, given what has happened with evolution (smart Christians accept it, but find excuses to still believe in God), I suspect that it will only have an incremental impact on religiosity, even amongst the elite.
Free will seems like a pretty boring topic to me. The main recent activity I have noticed in the area was Daniel Dennett’s “Freedom Evolves” book. That book was pretty boring and mostly wrong—I thought. It was curious to see Daniel Dennett make such a mess of the subject, though.
As it happens, I’m reading through Freedom Evolves right now; up to chapter 3, and while I don’t quite buy his ideas on inevitability, it so far doesn’t strike me as a mess?
I liked the bit on memes. Most of the rest of it was a lot of word games, IMO.
Here is what I don’t understand about the free will problem. I know this is a simple objection, so there must be a standard reply to it; but I don’t know what that reply is.
Denote F as a world in which free will exists, f as one in which it doesn’t. Denote B as a world in which you believe in free will, and b as one in which you don’t. Let a combination of the two, e.g., FB, denote the utility you derive from having that belief in that world. Suppose FB > Fb and fb > fB (being correct > being wrong).
The expected utility of B is FB x p(F) + fB x (1-p(F)). Expected utility of b is Fb x p(F) + fb x (1-p(F)). Choose b if Fb x p(F) + fb x (1-p(F)) > FB x p(F) + fB x (1-p(F)).
But, that’s not right in this case! You shouldn’t consider worlds of type f in your decision, because if you’re in one of those worlds, your decision is pre-ordained. It doesn’t make any sense to “choose” not to believe in free will—that belief may be correct, but if it is correct, then you can’t choose it.
Over worlds of type F, the expected utility of B is FB x p(F), and the utility of b is Fb x p(F), and FB > Fb. So you always choose B.
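(For anyone who wants the two decision rules side by side, here is a minimal sketch in Python. It is my own illustration rather than anything from the thread: the function names and the sample numbers are made up, and the only things taken from the comment above are the expected-utility formulas and the assumption that FB > Fb and fb > fB.)

```python
# Minimal sketch of the two decision rules described above.
# FB, Fb, fB, fb are the utilities defined in the comment; p_F is p(F),
# the probability that free will exists. The sample numbers below are
# arbitrary, chosen only to satisfy FB > Fb and fb > fB.

def standard_choice(FB, Fb, fB, fb, p_F):
    """Ordinary expected-utility rule: weigh both F-worlds and f-worlds."""
    eu_B = FB * p_F + fB * (1 - p_F)   # expected utility of believing in free will
    eu_b = Fb * p_F + fb * (1 - p_F)   # expected utility of not believing
    return "B" if eu_B >= eu_b else "b"

def restricted_choice(FB, Fb, fB, fb, p_F):
    """The rule argued for above: count only F-worlds, since only there is the
    'choice' of belief actually a choice. Because FB > Fb, this always picks B."""
    return "B" if FB * p_F >= Fb * p_F else "b"

if __name__ == "__main__":
    print(standard_choice(FB=10, Fb=2, fB=1, fb=8, p_F=0.3))    # may print "b"
    print(restricted_choice(FB=10, Fb=2, fB=1, fb=8, p_F=0.3))  # always prints "B"
```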
Saying that you shouldn’t do something because it’s preordained whether you do it or not is a very confused way of looking at things. Christine Korsgaard, by whom I am normally unimpressed but who has a few quotables, says:

Having discovered that my conduct is predictable, will I now sit quietly in my chair, waiting to see what I will do? Then I will not do anything but sit quietly in my chair. And that had better be what you predicted, or you will have been wrong. But in any case why should I do that, if I think I ought to be working?

(From “The Authority of Reflection”)
I don’t understand what that Korsgaard quote is trying to say.
I didn’t say that. I said that, when making a choice, you shouldn’t consider, in your set of possible worlds, possible worlds in which you can’t make that choice.
It’s certainly not as confused a way of looking at things as choosing to believe that you can’t choose what to believe.
I should have said you shouldn’t try to consider those worlds. If you are in f, then it may be that you will consider such possible worlds; and there’s no shouldness about it.
“But”, you might object, “what should you do if you are a computer program, running in a deterministic language on deterministic hardware?”
The answer is that in that case, you do what you will do. You might adopt the view that you have no free will, and you might be right.
The 2-sentence version of what I’m saying is that, if you don’t believe in free will, you might be making an error that you could have avoided. But if you believe in free will, you can’t be making an error that you could have avoided.
In the context of the larger paper, the most charitable way of interpreting her (IMO) is that whether we have free will or not, we have the subjective impression of it, this impression is simply not going anywhere, and so it makes no sense to try to figure out how a lack of free will ought to influence our behavior, because then we’ll just sit around waiting for our lack of free will to pick us up out of our chair and make us water our houseplants and that’s not going to happen.
What if we’re in a possible world where we can’t choose not to consider those worlds? ;)
“Choosing to believe that you can’t choose what to believe” is not a way of looking at things; it’s a possible state of affairs, in which one has a somewhat self-undermining and false belief. Now, believing that one can choose to believe that one cannot choose what to believe is a way of looking at things, and might even be true. There is some evidence that people can choose to believe self-undermining false things, so believing that one could choose to believe a particular self-undermining false thing which happens to have recursive bearing on the choice to believe it isn’t so far out.
Denote F as a world in which free will exists, f as one in which it doesn’t.

I am unable to attach a truth condition to these sentences—I can’t imagine two different ways that reality could be which would make the statements true or alternatively false.
http://wiki.lesswrong.com/wiki/Free_will_(solution)
Do you mean that the phrases “free will exists” and “free will does not exist” are both incoherent?
If I want to, I can assign a meaning to “free will” in which it is tautologically true of causal universes as such, and applied to agents, is true of some agents but not others. But you used the term, you tell me what it means to you.
You used the term first. You called it a “dead horse” and “about as easy as a problem can get and still be Confusing”. I would think this meant that you have a clear concept of what it means. And it can’t be a tautology, because tautologies are not dead horses.
I can at least say that, to me, “Free will exists” implies “No Omega can predict with certainty whether I will one-box or two-box.” (This is not an “if and only if” because I don’t want to say that a random process has free will; nor that an undecidable algorithm has free will.)
I thought about saying: “Free will does not exist” if and only if “Consciousness is epiphenomenal”. That sounds dangerously tautological, but closer to what I mean.
I can’t think how to say anything more descriptive than what I wrote in my first comment above. I understand that saying there is free will seems to imply that I am not an algorithm; and that that seems to require some weird spiritualism or vitalism. But that is vague and fuzzy to me; whereas it is clear that it doesn’t make sense to worry about what I should do in the worlds where I can’t actually choose what I will do. I choose to live with the vague paradox rather than the clear-cut one.
ADDED: I should clarify that I don’t believe in free will. I believe there is no such thing. But, when choosing how to act, I don’t consider that possibility, because of the reasons I gave previously.
Then you’ve got the naive incoherent version of “free will” stuck in your head. Read the links.
http://wiki.lesswrong.com/wiki/Free_will
http://wiki.lesswrong.com/wiki/Free_will_(solution)
All right, I read all of the non-italicized links, except for the “All posts on Less Wrong tagged Free Will”, trusting that one of them would say something relevant to what I’ve said here. But alas, no.
All of those links are attempts to argue about the truth value of “there is free will”, or about whether the concept of free will is coherent, or about what sort of mental models might cause someone to believe in free will.
None of those things are at issue here. What I am talking about is what happens when you are trying to compute something over different possible worlds, where what your computation actually does is different in these different worlds. When you must compare expected value in possible worlds in which there is no free will to expected value in possible worlds in which there is free will, and then make a choice, what that choice actually does is not independent of what possible world you end up in. This means that you can’t apply expectation-maximization in the usual way. The counterintuitive result, I think, is that you should act in the way that maximizes expected value given that there is free will, regardless of the computed expected value given that there is not free will.
As I mentioned, I don’t believe in free will. But I think, based on a history of other concepts or frameworks that seemed paradoxical but were eventually worked out satisfactorily, that it’s possible there’s something to the naive notion of “free will”.
We have a naive notion of “free will” which, so far, no one has been able to connect up with our understanding of physics in a coherent way. This is powerful evidence that it doesn’t exist, or isn’t even a meaningful concept. It isn’t proof, however; I could say the same thing about “consciousness”, which as far as I can see really shouldn’t exist.
All attempts that I’ve seen so far to parse out what free will means, including Eliezer’s careful and well-written essays linked to above, fail to noticeably reduce the probability I assign to there being naive “free will”, because the probability that there is some error in the description or mapping or analogies made is always much higher than the very-low prior probability that I assign to there being “free will”.
I’m not arguing in favor of free will. I’m arguing that, when considering an action to take that is conditioned on the existence of free will, you should not do the usual expected-utility calculations, because the answer to the free will question determines what it is you’re actually doing when you choose an action to take, in a way that has an asymmetry such that, if there is any possibility epsilon > 0 that free will exists, you should assume it exists.
(BTW, I think a philosopher who wished to defend free will could rightfully make the blanket assertion against all of Eliezer’s posts that they assume what they are trying to prove. It’s pointless to start from the position that you are an algorithm in a Blocks World, and argue from there against free will. There’s some good stuff in there, but it’s not going to convince someone who isn’t already reductionist or determinist.)
When you must compare expected value in possible worlds in which there is no free will to expected value in possible worlds in which there is free will

I have stated exactly what I mean by the term “free will” and it makes this sentence nonsense; there is no world in which you do not have free will. And I see no way that your will could possibly be any freer than it already is. There is no possible amendment to reality which you can consistently describe, that would make your free will any freer than it is in our own timeless and deterministic (though branching) universe.
What do you mean by “free will” that makes your sentence non-nonsense? Don’t say “if we did actually have free will”, tell me how reality could be different.
in our own timeless and deterministic (though branching) universe.

That’s the part I don’t buy. I’m not saying it’s false, but I don’t see any good reason to think it’s true. (I think I read the posts where you explained why you believe it, but I might have missed some.)
I can’t state exactly what I mean by “free will”, any more than I can state exactly what I mean by “consciousness”. No one has come up with a reductionist account of either. But since I actually do believe in consciousness, I can’t dismiss free will as nonsense.
A clarification added in response to the instantaneous orgy of downvotes: I realize that Eliezer has provided a reductionist explanation for how he thinks “free will” should be interpreted, and for why people believe in it. That is not what I mean. I mean that no one has come up with a reductionist account for how what people actually mean by “free will” could work in the physical world. Just as no one has come up with a reductionist account for how what people mean by “consciousness” could work in the physical world.
If you find a reason to disagree with this, it means that you have a tremendously important insight, and should probably write a little comment to share your revelation with us on a reductionist implementation of naive free will, or consciousness.
I can’t state exactly what I mean by “free will”, any more than I can state exactly what I mean by “consciousness”. No one has come up with a reductionist account of either.

This is not only incorrect, but is in dismissive denial of statements to the opposite made by people in response to your questions. It is one thing to consider an argument incorrect or to be unwilling to accept it; it is another to fail to understand the argument to the point of denying its very existence.
You should be more specific: Point out which part of my statement is incorrect, and what statements I am dismissively denying.
A reductionist account of causality does not count as a reductionist account of free will. Saying, “The world is deterministic, therefore ‘free will’ actually means the uninteresting concept X that is not what anybody means by ‘free will’” does not count as a deterministic account of free will.
What I mean is that no one has provided a reductionist account of how the naive notion of free will could work. Not that no one has provided a reductionist account of how the world actually works and what “free will” maps onto in that world.
I’m also curious why it’s bad for me to dismissively deny statements made to me, but okay for you to dismissively deny my statements as incorrect.
What I mean is that no one has provided a reductionist account of how the naive notion of free will could work.

Because that would be as silly as seeking a reductionist account of how souls or gods could “work”—the only way you’re going to get one is by explaining how the brain tends to believe these (purely mental) phenomena actually exist.
Free will is just the feeling that more than one choice is possible, just like a soul or a god is just the feeling of agency, detached from an actual agent.
All three are descriptions of mental phenomena, rather than having anything to do with a physical reality outside the brain.
Again—yes, I agree that what you say is almost certainly true. The reason I said that no one has provided a reductionist account of how the naive notion of free will could work, was to point out its similarity to the question of consciousness, which seems as nonsensical as free will, and yet exists; and thereby show that there is a possibility that there is something to the naive notion. And as long as there is some probability epsilon > 0 of that, then we have the situation I described above when performing expectation maximization.
BTW, your response is an assertion, or at best an explaining-away; not a proof.
The mistake you’re making is thinking that determinism means your decisions are irrelevant. It doesn’t: the universe doesn’t swoop in and force you to decide a certain way even though you’d rather not. Determinism only means that your decisions, by being part of physical reality rather than existing outside it, result from the physical events that led to them. You aren’t free to make events happen without a cause, but you can still look at evidence and come to correct conclusions.
If you can’t choose whether you believe, then you don’t choose whether you believe. You just believe or not. The full equation still captures the correctness of your belief, however you arrived at it. There’s nothing inconsistent about thinking that you are forced to not believe and that seeing the equation is (part of) what forced you.
(I avoid the phrase “free will” because there are so many different definitions. You seem to be using one that involves choice, while Eliezer uses one based on control. As I understand it, the two of you would disagree about whether a TV remote in a deterministic universe has free will.)
edit: missing word, extra word
Brian said:

If you can’t choose whether you believe, then you don’t choose whether you believe. You just believe or not. The full equation still captures the correctness of your belief, however you arrived at it. There’s nothing inconsistent about thinking that you are forced to not believe and that seeing the equation is (part of) what forced you.

And Alicorn said:

What if we’re in a possible world where we can’t choose not to consider those worlds? ;)

And before either of those, I said:

“But”, you might object, “what should you do if you are a computer program, running in a deterministic language on deterministic hardware?”

The answer is that in that case, you do what you will do. You might adopt the view that you have no free will, and you might be right.
These all seem to mean the same thing. When you try to argue against what someone said by agreeing with him, someone is failing to communicate.
Brian, my objection is not based on the case fb. It’s based on the cases Fb and fB. fB is a mistake that you had to make. Fb, “choosing to believe that you can’t choose to believe”, is a mistake you didn’t have to make.
Yes. I started writing my reply before Alicorn said anything, took a short break, posted it, and was a bit surprised to see a whole discussion had happened under my nose.
But I don’t see how what you originally said is the same as what you ended up saying.
At first, you said not to consider f because there’s no point. My response was that the equation correctly includes f regardless of your ability to choose based on the solution.
Now you are saying that Fb is different from (inferior to?) fB.
Eliezer_Yudkowsky wrote on 19 August 2009 03:24:46PM:
Tversky demonstrated: One experiment based on the simple dilemma found that approximately 40% of participants played “cooperate” (i.e., stayed silent). Hmmm...
Compassion (in a certain sense) may be part of your answer.
If I (as Prisoner A) have a term in my utility function such that an injury to Prisoner B is an injury to me (discounted), it can make ‘Cooperate’ much more attractive.
I might have enough compassion to be willing to do 6 months in jail if it will spare Prisoner B a 2-year prison term (or more).
For example, given the external payoff matrix given by Wei Dai (http://lesswrong.com/lw/15z/ingredients_of_timeless_decision_theory/11w9) (19 August 2009 07:08:23AM):
My INTERNAL payoff matrix becomes:
And ‘Cooperate’ now strictly dominates using elementary game theory.
Thank you for your time and consideration.
RickJS
While a good question, Eliezer_Yudkowsky has already thoroughly answered it in The True Prisoner’s Dilemma.
His point there is, the values in the matrix are supposed to represent the participants’ utility, rather than jail time, which accounts for your compassion for your friend. If it were simply prison sentences, your reasoning would apply, which is why EY says the true Prisoner’s Dilemma requires convoluted, unusual scenarios, and why normal presentations of the PD don’t make the situation clear.
That Prisoner A is completely and utterly selfish is part of the Prisoner’s Dilemma. If the prisoner’s not selfish, it’s not the Prisoner’s Dilemma anymore.
EDIT: Of course, this is only true if the numbers in the matrix represent years spent in jail, not utilons.
inorite?!
Of course, this might still be muddy if you recast the payoff matrix in utilons, or (to abstract away less) adjust the “external” payoff matrices so that the “internal” payoff matrices match those of the original problem.
Inorite? What is that?
I suspect I’m not smart enough to play on this site. I’m quite unsure I can even parse your sentence correctly, and I can’t imagine a reason to adjust the external payoff matrices (they were given by Wei Dai; that is the original problem I’m discussing) so the internal payoff matrices match something. I’m baffled.
“inorite”.
See Cyan’s comment below. Do not be dispirited by lolspeak.
Also, the reason to adjust the payoff matrices in the original problem is so that your ‘internal’ payoff matrices match those of Wei Dai’s problem, or to put it another way, consider the problem in the least convenient possible world. Basically, the prisoner’s dilemma is still there if you take the problem to be in utilons, which take into account things like your ‘compassion’ (in this case, valuing the reward given to the other person). I can’t quite figure out what your formula for discounting is above, so let me simplify...
It would be remiss for me to not do the math, though it is not my forte:
Suppose the matrix represents jelly beans for you or the opponent, each worth 1 utilon. Further suppose that you get .25 utilons for each jelly bean the opponent gets, due to your ‘compassion’. Now take this payoff matrix (in jellybeans):

375/500   -150/600
600/0     75/100

Which becomes in your ‘internal’ matrix (in utilons):

500/500   0/600
600/0     100/100
Now cooperation is dominated by defection for the ‘compassionate’ person.
Someone please note if my numbers don’t work out—it’s early here.
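For what it’s worth, the arithmetic can be checked mechanically. The sketch below is my own, not thomblake’s; it assumes only the .25 compassion weight and the jellybean payoffs from the comment above, plus the reading that the opponent’s side of each entry is left in raw jellybeans.

```python
# Quick check of the jellybean-to-utilon transform above, for the row player only.
COMPASSION = 0.25

# (your jellybeans, opponent's jellybeans); rows are your move, columns the opponent's.
jellybeans = {
    ("C", "C"): (375, 500),
    ("C", "D"): (-150, 600),
    ("D", "C"): (600, 0),
    ("D", "D"): (75, 100),
}

# Your 'internal' utility: your own beans plus 0.25 utilons per bean the opponent gets.
internal = {moves: (mine + COMPASSION * theirs, theirs)
            for moves, (mine, theirs) in jellybeans.items()}

for moves, payoff in internal.items():
    print(moves, payoff)   # (500.0, 500), (0.0, 600), (600.0, 0), (100.0, 100)

# Defection still dominates cooperation for the 'compassionate' row player:
assert internal[("D", "C")][0] > internal[("C", "C")][0]   # 600 > 500
assert internal[("D", "D")][0] > internal[("C", "D")][0]   # 100 > 0
```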
Ah. Thanks! I think I get that.
But maybe I just think I do. I thought I understood that narrow part of Wei Dai’s post on a problem that maybe defeats TDT. I had no idea that compassion had already been considered and compensated out of consideration. And that’s such common shared knowledge here in the LessWrong community that it need not be mentioned.
I have a lot to learn. I now see I was very arrogant to think I could contribute here. I should read the archives & wiki before I post. I apologize.
<<Begins to compute an estimated time to de-lurk. They collectively write several times faster than I can read, even if I don’t slow down to mull it over. Hmmm… >>