For convenience. If you show me a few examples where believing that I don’t have free will helps me get what I want, I might start caring about the actual structure of my mental algorithms as seen from the outside.
It is beneficial to believe you don’t have free will if you don’t have free will. From Surely You’re Joking, Mr. Feynman!:
When the real demonstration came he had us walk on stage, and he hypnotized us in front of the whole Princeton Graduate College. This time the effect was stronger; I guess I had learned how to become hypnotized. The hypnotist made various demonstrations, having me do things that I couldn’t normally do, and at the end he said that after I came out of hypnosis, instead of returning to my seat directly, which was the natural way to go, I would walk all the way around the room and go to my seat from the back.
All through the demonstration I was vaguely aware of what was going on, and cooperating with the things the hypnotist said, but this time I decided, “Damn it, enough is enough! I’m gonna go straight to my seat.”
When it was time to get up and go off the stage, I started to walk straight to my seat. But then an annoying feeling came over me: I felt so uncomfortable that I couldn’t continue. I walked all the way around the hall.
I was hypnotized in another situation some time later by a woman. While I was hypnotized she said, “I’m going to light a match, blow it out, and immediately touch the back of your hand with it. You will feel no pain.”
I thought, “Baloney!” She took a match, lit it, blew it out, and touched it to the back of my hand. It felt slightly warm. My eyes were closed throughout all of this, but I was thinking, “That’s easy. She lit one match, but touched a different match to my hand. There’s nothin’ to that; it’s a fake!”
When I came out of the hypnosis and looked at the back of my hand, I got the biggest surprise: There was a burn on the back of my hand. Soon a blister grew, and it never hurt at all, even when it broke.
So I found hypnosis to be a very interesting experience. All the time you’re saying to yourself, “I could do that, but I won’t”—which is just another way of saying that you can’t.
Why? How an algorithm feels is not a reliable indicator of its internal structure.
All right, suppose all that is true, and that people can be hypnotized so that they literally can’t break away from the hypnotizing effect until released by the hypnotist.
That suggests that I should believe that hypnotism is dangerous. It would be useful to be aware of this danger so that I can avoid being manipulated by a malicious hypnotist, since it turns out that what appear to be parlor tricks are actually mind control. Great.
But, if I understand it correctly, which I’m not sure that I do, a world without free will is like a world where we are always hypnotized.
Once you’re under the hypnotist’s spell, it doesn’t do any good to realize that you have no free will. You’re still stuck. You will still get burned or embarrassed if the hypnotist wants to burn you.
So if I’m already under the “hypnotist’s” spell, in a Universe where the hypnotist is just an impersonal combination of an alien evolution process and preset physical constants, why would I want to know that? What good would the information do me?
I’m sorry, I’m not maintaining that free will is incompatible with determinism, only that sometimes free will is not present, even though it appears to be. When hypnotized, Richard Feynman did not have free will (or had it only to a greatly reduced extent) in the sense in which he had it under normal circumstances—and yet subjectively he noticed no difference.
It appears to me that you created your bottom line from observing your subjective impression of free will. I suggest that you strike out the entire edifice you built on these data—it is built on sand, not stone.
I see; I did misunderstand, but I think I get your point now. You’re not claiming that if only Mr. Feynman had known about the limits of free will he could have avoided a burn; you’re saying that, like all good rationalists everywhere, I should only want to believe true things, and it is unlikely that “I have free will” is a true thing, because sometimes smart people think that and turn out to be wrong.
Well, OK, fair enough, but it turns out that I get a lot of utility out of believing that I have free will. I’m happy to set aside that belief if there’s some specific reason why the belief is likely to harm me or stop me from getting what I want. One of the things I want is to never believe a logically inconsistent set of facts, and one of the things I want is to never ignore the appropriately validated direct evidence of my senses. That’s still not enough, though, to get me to “don’t believe things that have a low Bayesian prior and little or no supporting evidence.” I don’t get any utility out of being a Bayesianist per se; worshipping Bayes is just a means to an end for me, and I can’t find the end when it comes to rejecting the hypothesis of free will.
Robin, I’ve liked your comments both on this thread and others that we’ve had, but I can’t afford to continue the discussion any time soon—I need to get back to my thesis, which is due in a couple of weeks. Feel free to get in the last word; I’ll read it and think about it, but I won’t respond.
Understood.
My last word, as you have been so generous as to give it to me, is that I actually do think you have free will. I believe you are wrong about what it is made of, just as the pre-classical Greeks were wrong about the shape of the Earth, but I don’t disagree that you have it.
Good luck on your thesis—I won’t distract you any more.
I place a very low probability on my having genuine ‘free will’, but I act as if I do because, if I don’t, it doesn’t matter what I do. It also seems to me that people who accept nihilism have life outcomes that I do not desire to share, and so the expected utility of acting as if I have free will is high even absent my previous argument. It’s a bit of a Pascal’s Wager.
Why do you define “free will” to refer to something that does not exist, when the thing which does exist—will unconstrained by circumstance or compulsion—is useful to refer to? For one, its absence is one indicator of an invalid contract.
I’m not exactly sure what you’re accusing me of. I think Freedom Evolves is about the best exposition of how I conceive of free will. I am also a libertarian. I find it personally useful to believe in free will irrespective of arguments about determinism and I think we should have political systems that assume free will. I still have some mental gymnastics to perform to reconcile a deterministic material universe with my own personal intuitive conception of free will but I don’t think that really matters.
I’m confused. I haven’t read Freedom Evolves, but Dennett is a compatibilist, afaik.
I think you’re saying you’re a compatibilist but act as if libertarianism were true, but I’m not sure.
I don’t really understand what you mean when you use the word ‘libertarian’; it doesn’t seem particularly related to my understanding. I mean it in the political sense. Perhaps there is a philosophical sense that you are using?
Libertarian is the name for someone who believes free will exists and that free will is incompatible with determinism. Lol, it didn’t even occur to me you could be talking about politics.
I swear, if there ever exists a Less Wrong drinking game, “naming collision” would be at least “finish the glass”.
Ok, I’ve done some googling and think I understand what you meant when you used the word. I’d never heard it in that context before. I guess philosophically I’m something like a compatibilist then, but I’m more of an ‘it’s largely irrelevant’-ist.
I see. The word “genuine” is important, then—a nod to the “wretched subterfuge” attitude toward compatibilist free will. I withdraw my implications.
(I read Elbow Room, myself.)
No! A world without libertarian free will is a world exactly like this one.
ETA: Robin’s point, I gather, is that a world without libertarian free will is a world where hypnotism is possible. Which, as it turns out, is this world.
I was actually making a lesser point: that the introspective appearance of free will is not even a reliable indicator of the presence of free will, much less a reliable guide to the nature of free will.
Edit: From which your interpretation follows, I suppose.