I quite understand that your “choice” is neither caused nor random but a third kind of thing that is neither.
OK, good, I thought so. You seemed pretty smart.
I can say lots of things about “random” and “causally determined” that distinguish these properties.
Why don’t you go ahead and do that, for a paragraph or so, and I’ll see if I can complete the pattern for you and give you the kind of description you’re looking for. To me it just seems obvious what a choice is, in the same way that I know what “truth” is and what “good” is, but if you can manage to describe the meaning of “random” analytically then I can probably copy it for the word “chosen.” If I can’t, that will surprise me.
But surely you can recognize that you aren’t actually that different from all the other objects you discover in the world and likely work the same way they do.
You’re hardly the first person who has wanted this kind of free will and gone about inventing a third kind of thing to prove that it existed.
Have I waxed poetic about souls and destiny and homunculi? I don’t remember “inventing” a third kind of thing. I’m just sort of pointing at my experience of choice and labeling it “choice.” If you insist that what I think is choice is really something else, you’re welcome to prove it to me with direct evidence, but I’m not really interested in Bayesian inferences here. I am unconvinced that brains and rocks are in the same reference class. I do not accept the physicalist-reductionist hypothesis as literally true, despite its excellent track record at producing useful models for predicting the future. I understand that the vast majority of people on this site -do- accept that hypothesis. I do not have the stamina or inclination to hold the field on that issue against an entire community of intelligent debaters.
Why don’t you go ahead and do that, for a paragraph or so, and I’ll see if I can complete the pattern for you and give you the kind of description you’re looking for. To me it just seems obvious what a choice is, in the same way that I know what “truth” is and what “good” is, but if you can manage to describe the meaning of “random” analytically then I can probably copy it for the word “chosen.” If I can’t, that will surprise me.
It’s obvious to you what “truth” is and what “goodness” is? Really? I think I can say clever and right things about these concepts because I’ve done a lot of studying and thinking. But the answers don’t seem obvious at all to me. Anyway, on to causality and randomness: clearly huge topics about which a lot has been said.
I believe a causal event is a kind of regularity, extended in spacetime, which has a variable that can be manipulated by a hypothetical agent at one end to control a variable at the other end (usually the effect is the later end in time). So by altering the velocity of an asteroid, the mean temperature of the planet Earth can be dramatically altered, for example. On a micro-level, intervening on a neuron and causing it to fire at a certain rate will lead to adjacent neurons firing. Altering the social mores of a society can cause a man not to return a wallet. For any one event to occur, a large number of variables have to be right, and any one of those variables can be altered so as to alter the event, so these examples are oversimplified. Lots more has been said if you’re interested; Pearl and Woodward are good authors.
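This manipulability account of causation can be sketched as a toy model. The function and the numbers below are invented purely for illustration, not taken from Pearl or Woodward:

```python
# Toy interventionist model of causation: one variable counts as a cause of
# another if setting it by fiat (an intervention) changes the downstream value.
def temperature(asteroid_velocity, solar_output):
    # Hypothetical toy relation: a fast incoming asteroid cools the planet
    # (impact winter), while solar output warms it. Numbers are illustrative.
    impact_cooling = 0.5 * asteroid_velocity
    return 15.0 + solar_output / 10.0 - impact_cooling

baseline = temperature(asteroid_velocity=0.0, solar_output=100.0)
intervened = temperature(asteroid_velocity=40.0, solar_output=100.0)

# Manipulating the upstream variable controls the downstream one, which is
# the interventionist mark of a causal relationship.
print(baseline)    # 25.0
print(intervened)  # 5.0
```

The point of the sketch is only that the causal claim cashes out as a counterfactual about interventions: wiggle the cause, and the effect wiggles.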
Randomness might be more difficult, since it isn’t obvious that ontological randomness even exists. Epistemological randomness does: rolling a die is a good example, in that we have no way to predict the outcome, though in principle we could. Some interpretations of quantum mechanics do involve ontological randomness. Such events can be distinguished from causal events in that the value of the resulting variable cannot be controlled by any agent, not because no agent is powerful enough but because there are no variables which can be intervened on to alter the outcome in the way desired. There is no possibility of controlling such events. It is possible that quantum indeterminacy is just the product of a hidden variable we don’t know about, or that the apparent randomness is actually just a product of anthropics: every possible state gets observed, and every outcome seems random because “you” only get to observe one and can’t communicate with the other “you”s.
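The epistemic reading of randomness can be illustrated with a pseudorandom die: the roll looks unpredictable, but an observer who knows the hidden state (here, the seed) predicts it perfectly. A minimal sketch:

```python
import random

# A die roll is "random" to anyone who doesn't know the generator's
# internal state -- but that state is just a hidden variable.
roller = random.Random(42)
unpredictable = roller.randint(1, 6)

# An observer who knows the hidden variable reproduces the outcome exactly,
# so the randomness was epistemic (ignorance), not ontological.
predictor = random.Random(42)
predicted = predictor.randint(1, 6)

print(unpredictable == predicted)  # True
```

Ontological randomness would be the case where no such hidden state exists even in principle, which is why the two notions come apart.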
Have I waxed poetic about souls and destiny and homunculi? I don’t remember “inventing” a third kind of thing. I’m just sort of pointing at my experience of choice and labeling it “choice.” If you insist that what I think is choice is really something else, you’re welcome to prove it to me with direct evidence, but I’m not really interested in Bayesian inferences here.
I don’t have a problem with you pointing at an experience and labeling it “choice”. I do that too. You make choices. It’s just that what it is to make a choice is one of these two things, a caused event or an uncaused event. You invent a third kind of thing when you come up with a new kind of event which isn’t seen anywhere else and declare it to be fundamental. And the way many philosophers have historically dealt with this exact problem is by positing souls and homunculi, “agent causation” and whatnot. When you decide that your experience of choice is a fundamental feature of the world you’re doing the exact same thing: any claim that something is irreducible is a claim that something belongs in our basic ontology. The fact that you didn’t do this in verse just means I’m not annoyed; it’s still the same mistake.
I am unconvinced that brains and rocks are in the same reference class. I do not accept the physicalist-reductionist hypothesis as literally true, despite its excellent track record at producing useful models for predicting the future. I understand that the vast majority of people on this site -do- accept that hypothesis. I do not have the stamina or inclination to hold the field on that issue against an entire community of intelligent debaters.
I’ve been known to be more tolerant than others of unorthodoxy on this matter, and I doubt many more would join in; most people probably have the same arguments anyway. You’re not obligated to, but I’d be interested in hearing your reasons for not accepting the hypothesis. However, my definition of truth is something like “the limit of useful modeling”, so we might have to sort truth out a bit too. If you preface the discussion to demonstrate that you’re aware the position is unpopular already and you’re just trying to work this out, you can probably avoid a karma hit. I’ll vote you up if it happens.
If you preface the discussion to demonstrate that you’re aware the position is unpopular already and you’re just trying to work this out you can probably avoid a karma hit.
Sure, consider it prefaced. I’m not trying to convince anybody; I’m just sharing my views because one or two users seem curious about them, and because I might learn something this way. It’s not very important to me. If anyone would like me to stop talking about this topic on Less Wrong, feel free to say so explicitly, and I will be glad to oblige you.
It’s obvious to you what “truth” is and what “goodness” is? Really?
I don’t mean that the entire contents, in detail, of what is and is not inside the box marked “true” is known to me. That would be ridiculous. I just mean that I know which box I’m talking about, and so do you. Sophisticated discussions about what “true” means (as opposed to discussion about whether some specific claim X is true) generally do more harm than good. You can tell cute stories about The Simple Truth, and that may help startle some philosophers into realizing where they’ve gone off-course, but mostly you’re just lending a little color to the Reflexive Property or the Identity Property: a = a.
Some interpretations of quantum mechanics do involve ontological randomness. Such events can be distinguished from causal events in that the valuable of the resulting variable cannot be controlled by any agent, not because no agent is powerful enough but because there are no variables which can be intervened on to alter the outcome in the way desired. There is no possibility of controlling such events.
I can probably work with this. I expect you will still think I’m postulating unnecessary ontological entities, and, given your epistemological value system, you’ll be right. Still, maybe the details will interest you.
Some interpretations of conscious awareness do involve ontological choice. Such events can be distinguished from random events in that the value of the resulting variable can be controlled by exactly one agent, as opposed to zero agents, as in the case of a truly random variable. The agent in question could be taken to be some subset of the neurons in the brain, or some subset of a person’s conscious awareness, or some kind of minimally intervening deity. It is not clear exactly who or what the agent is.
Conscious events can be distinguished from caused events in that conventional measures of kinetic power and information-theoretic power are bad predictors of a hypothetical agent’s ability to manipulate the outcome of a conscious event. Whether because the relevant interactions among neurons, given their level of chaotic complexity, occur in a slice of spacetime that is small enough to be resistant to external computation, or because the event is driven by some process outside the well-understood laws of physics, a conscious event is difficult or impossible to control from outside the relevant consciousness. Thus, instead of a single output depending subtly on many other variables, the output depends almost exclusively on a single input or small set of inputs.
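The proposed three-way taxonomy could be phrased as a toy classification rule, sorting events by how many agents can intervene to set the outcome. This is a hypothetical illustration of the scheme being floated here, not any established formalism:

```python
def classify_event(controlling_agents):
    """Toy version of the proposed taxonomy: classify an event by the
    number of agents able to control its outcome."""
    if controlling_agents == 0:
        return "random"   # no variable any agent can intervene on
    if controlling_agents == 1:
        return "chosen"   # controllable only from inside one consciousness
    return "caused"       # manipulable by any sufficiently powerful agent

print(classify_event(0))  # random
print(classify_event(1))  # chosen
print(classify_event(5))  # caused
```

On this sketch, the "chosen" category is exactly the one that standard physicalism denies is fundamental, which is where the disagreement in this thread lives.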
You’re not obligated to but I’d be interested in hearing your reasons for not accepting the hypothesis.
I’d be happy to explain it in August, when I’ll be bored silly. At the moment, I’m pretty busy with my law school thesis, which is on antitrust law and has little to do with either free will or reductionism. Feel free to comment on any of my posts around that time, or to send your contact info to zelinsky a t gm ail dot com. Zelinsky is a rationalist friend of mine who agrees with you and only knows one person who thinks like me, so he’ll know who it’s for.
Thanks for bearing with me so far and for responding to arguments that must no doubt strike you as woefully unenlightened with a healthy measure of respect and patience. I really am done with both the free will discussion and the reductionist discussion for now, but I enjoyed discussing them with you, and consider it well worth the karma I ‘spent’. If you can think of any ways that what you see as my misunderstanding of free will or reductionism is likely to interfere with my attempts to help refine LW’s understanding of Goodhart’s Law, please let me know, and I’ll vote them up.
Why? How an algorithm feels is not a reliable indicator of its internal structure.

For convenience. If you show me a few examples where believing that I don’t have free will helps me get what I want, I might start caring about the actual structure of my mental algorithms as seen from the outside.
It is beneficial to believe you don’t have free will if you don’t have free will. From Surely You’re Joking, Mr. Feynman!:
When the real demonstration came he had us walk on stage, and he hypnotized us in front of the whole Princeton Graduate College. This time the effect was stronger; I guess I had learned how to become hypnotized. The hypnotist made various demonstrations, having me do things that I couldn’t normally do, and at the end he said that after I came out of hypnosis, instead of returning to my seat directly, which was the natural way to go, I would walk all the way around the room and go to my seat from the back.
All through the demonstration I was vaguely aware of what was going on, and cooperating with the things the hypnotist said, but this time I decided, “Damn it, enough is enough! I’m gonna go straight to my seat.”
When it was time to get up and go off the stage, I started to walk straight to my seat. But then an annoying feeling came over me: I felt so uncomfortable that I couldn’t continue. I walked all the way around the hall.
I was hypnotized in another situation some time later by a woman. While I was hypnotized she said, “I’m going to light a match, blow it out, and immediately touch the back of your hand with it. You will feel no pain.”
I thought, “Baloney!” She took a match, lit it, blew it out, and touched it to the back of my hand. It felt slightly warm. My eyes were closed throughout all of this, but I was thinking, “That’s easy. She lit one match, but touched a different match to my hand. There’s nothin’ to that; it’s a fake!”
When I came out of the hypnosis and looked at the back of my hand, I got the biggest surprise: There was a burn on the back of my hand. Soon a blister grew, and it never hurt at all, even when it broke.
So I found hypnosis to be a very interesting experience. All the time you’re saying to yourself, “I could do that, but I won’t”—which is just
another way of saying that you can’t.
All right, suppose all that is true, and that people can be hypnotized so that they literally can’t break away from the hypnotizing effect until released by the hypnotist.
That suggests that I should believe that hypnotism is dangerous. It would be useful to be aware of this danger so that I can avoid being manipulated by a malicious hypnotist, since it turns out that what appear to be parlor tricks are actually mind control. Great.
But, if I understand it correctly, which I’m not sure that I do, a world without free will is like a world where we are always hypnotized.
Once you’re under the hypnotist’s spell, it doesn’t do any good to realize that you have no free will. You’re still stuck. You will still get burned or embarrassed if the hypnotist wants to burn you.
So if I’m already under the “hypnotist’s” spell, in a Universe where the hypnotist is just an impersonal combination of an alien evolution process and preset physical constants, why would I want to know that? What good would the information do me?
I’m sorry, I’m not maintaining that free will is incompatible with determinism, only that sometimes free will is not present even though it appears to be. When hypnotized, Richard Feynman did not have free will (or had it only to a greatly reduced extent) in the sense in which he had it under normal circumstances—and yet subjectively he noticed no difference.
It appears to me that you created your bottom line from observing your subjective impression of free will. I suggest that you strike out the entire edifice you built on these data—it is built on sand, not stone.
I see; I did misunderstand, but I think I get your point now. You’re not claiming that if only Mr. Feynman had known about the limits of free will he could have avoided a burn; you’re saying that, like all good rationalists everywhere, I should only want to believe true things, and it is unlikely that “I have free will” is a true thing, because sometimes smart people think that and turn out to be wrong.
Well, OK, fair enough, but it turns out that I get a lot of utility out of believing that I have free will. I’m happy to set aside that belief if there’s some specific reason why the belief is likely to harm me or stop me from getting what I want. One of the things I want is to never believe a logically inconsistent set of facts, and one of the things I want is to never ignore the appropriately validated direct evidence of my senses. That’s still not enough, though, to get me to “don’t believe things that have a low Bayesian prior and little or no supporting evidence.” I don’t get any utility out of being a Bayesianist per se; worshipping Bayes is just a means to an end for me, and I can’t find the end when it comes to rejecting the hypothesis of free will.
Robin, I’ve liked your comments both on this thread and others that we’ve had, but I can’t afford to continue the discussion any time soon—I need to get back to my thesis, which is due in a couple of weeks. Feel free to get in the last word; I’ll read it and think about it, but I won’t respond.
Understood.

My last word, as you have been so generous as to give it to me, is that I actually do think you have free will. I believe you are wrong about what it is made of, just as the pre-classical Greeks were wrong about the shape of the Earth, but I don’t disagree that you have it.
Good luck on your thesis—I won’t distract you any more.
I place a very low probability on my having genuine ‘free will’, but I act as if I do, because if I don’t, it doesn’t matter what I do. It also seems to me that people who accept nihilism have life outcomes that I do not desire to share, and so the expected utility of acting as if I have free will is high even absent my previous argument. It’s a bit of a Pascal’s Wager.
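The wager can be made explicit as a small expected-utility table. The probability echoes the "very low probability" above; the payoffs are made-up numbers chosen only to capture the asymmetry being claimed:

```python
# Pascal's-Wager-style payoff table for acting as if one has free will.
# Utilities are invented, illustrative numbers.
p_free_will = 0.05  # "very low probability" on genuine free will

payoff = {
    # (free_will_exists, act_as_if_free): utility
    (True,  True):  10,   # efforts actually steer outcomes
    (True,  False): -10,  # fatalism forfeits real influence
    (False, True):  0,    # acting made no difference either way
    (False, False): 0,    # ditto
}

def expected_utility(act_as_if_free):
    return (p_free_will * payoff[(True, act_as_if_free)]
            + (1 - p_free_will) * payoff[(False, act_as_if_free)])

print(expected_utility(True))   # positive
print(expected_utility(False))  # negative
```

Under these assumptions, acting as if one has free will dominates: it costs nothing if determinism is true and pays off if it isn't.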
Why do you define “free will” to refer to something that does not exist, when the thing which does exist—will unconstrained by circumstance or compulsion—is useful to refer to? For one, its absence is one indicator of an invalid contract.
I’m not exactly sure what you’re accusing me of. I think Freedom Evolves is about the best exposition of how I conceive of free will. I am also a libertarian. I find it personally useful to believe in free will irrespective of arguments about determinism and I think we should have political systems that assume free will. I still have some mental gymnastics to perform to reconcile a deterministic material universe with my own personal intuitive conception of free will but I don’t think that really matters.
I’m confused. I haven’t read Freedom Evolves, but Dennett is a compatibilist, afaik. I think you’re saying you’re a compatibilist but act as if libertarianism were true, but I’m not sure.

I don’t really understand what you mean when you use the word ‘libertarian’; it doesn’t seem particularly related to my understanding. I mean it in the political sense. Perhaps there is a philosophical sense that you are using?
Libertarian is the name for someone who believes free will exists and that free will is incompatible with determinism. Lol, it didn’t even occur to me you could be talking about politics.
I swear, if there ever exists a Less Wrong drinking game, “naming collision” would be at least “finish the glass”.

Ok, I’ve done some googling and think I understand what you meant when you used the word. I’d never heard it in that context before. I guess philosophically I’m something like a compatibilist then, but I’m more of an ‘it’s largely irrelevant’-ist.

I see. The word “genuine” is important, then—a nod to the “wretched subterfuge” attitude toward compatibilist free will. I withdraw my implications.

(I read Elbow Room, myself.)
But, if I understand it correctly, which I’m not sure that I do, a world without free will is like a world where we are always hypnotized.
No! A world without libertarian free will is a world exactly like this one.
ETA: Robin’s point, I gather, is that a world without libertarian free will is a world where hypnotism is possible. Which, as it turns out, is this world.
I was actually making a lesser point: that the introspective appearance of free will is not even a reliable indicator of the presence of free will, much less a reliable guide to the nature of free will.
Edit: From which your interpretation follows, I suppose.