I don’t think the right way to deal with it is to declare that nothing is beautiful, good, or conscious.
Yes, obviously. But just as it is a waste of time trying to get everyone to agree on what is beautiful, so too it is a waste of time trying to get everyone to agree on what free will is. Like I said, it’s really quibbling over terminology, which is almost always a waste of time.
Having said which, I think I can give a not-too-hopeless criterion distinguishing agents we might reasonably want to say have free will from those we don’t. X has free will in regard to action Y if and only if every good explanation for why X did Y goes via X’s preference for Y or decision to do Y or something of the kind.
OK, that’s not entirely unreasonable, but on that definition no reliably predictable agent has free will because there is always another good explanation that does not appeal to the agent’s desires, namely, whatever model would be used by a reliable predictor.
One unsatisfactory feature of this criterion is that it appeals to the notions of preference and decision, which aren’t necessarily any easier to define clearly than “free will” itself.
Indeed.
I would actually say that a chess-playing computer does something rather like deciding, and I might even claim it has a little bit of free will!
OK, then your intuitive definition of “free will” is very different from mine. I would not say that a chess-playing computer has free will, at least not given current chess-playing technology. On my view of free will, a chess-playing computer with free will should be able to decide, for example, that it didn’t want to play chess any more.
It sounds as if you’re proposing something like “not being reliably predictable”, but surely that won’t do; do you want to say a (quantum) random number generator has free will?
I’d say that not being reliably predictable is a necessary but not sufficient condition.
I think ialdabaoth actually came pretty close to getting it right:
‘free will’ isn’t a binary thing; it’s a relational measurement with a spectrum. And predictability is explicitly incompatible with it, in the same way that entropy measurements depend on how much predictive information you have about a system. (I suspect that ‘entropy’ and ‘free will’ are essentially identical terms, with the latter applying to systems that we want to anthropomorphize.)
no reliably predictable agent has free will because there is always another good explanation that does not appeal to the agent’s desires, namely, whatever model would be used by a reliable predictor.
I think that’s wrong for two reasons. The first is that the model might explicitly include the agent’s desires. The second is that a model might predict much better than it explains. (Though exactly what constitutes good explanation is another thing people may reasonably disagree on.)
a chess playing computer with free will should be able to decide, for example, that it didn’t want to play chess any more.
I think that’s better understood as a limit on its intelligence than on its freedom. It doesn’t have the mental apparatus to form thoughts about whether or not to play chess (except in so far as it can resign any given game, of course). It may be that we shouldn’t try to talk about whether an agent has free will unless it has some notion of its own decision-making process, in which case I’d say not that the chess program lacks free will, but that it’s the wrong kind of thing to have or lack free will. (If you have no will, it makes no sense to ask whether it is free.)
not being reliably predictable is a necessary but not sufficient condition.
Your objection to compatibilism was, unless I badly misunderstood, that no one has given a good compatibilist criterion for when something has free will. My objection was that you haven’t given a good incompatibilist criterion either. The fact that you can state a necessary condition doesn’t help with that; the compatibilist can state necessary conditions too.
I think ialdabaoth actually came pretty close to getting it right
There seem to me to be a number of quite different ways to interpret what he wrote. I am guessing that you mean something like: “I define free will to be unpredictability, with the further condition that we apply it only to agents we wish to anthropomorphize”. I suppose that gets around my random number generator example, but not really in a very satisfactory way.
So, anyway, suppose someone offers me a bribe. You know me well, and in particular you know that (1) I don’t want to do the thing they’re hoping to bribe me to, (2) I care a lot about my integrity, (3) I care a lot about my perceived integrity, and (4) the bribe is not large relative to how much money I have. You conclude, with great confidence, that I will refuse the bribe. Do you really want to say that this indicates that I didn’t freely refuse the bribe?
On another occasion I’m offered another bribe. But this time some evildoer with very strange preferences gets hold of me and compels me, at gunpoint, to decide whether to take it by flipping a coin. My decision is now maximally unpredictable. Is it maximally free?
I think the answers to the questions in those paragraphs should both be “no”, and accordingly I think unpredictability and freedom can’t be so close to being the same thing.
the model might explicitly include the agent’s desires
OK, let me try a different counter-argument then: do you believe we have free will to choose our desires? I don’t. For example, I desire chocolate. This is not something I chose, it’s something that happened to me. I have no idea how I could go about deciding not to desire chocolate. (I suppose I could put myself through some sort of aversion therapy, but that’s not the same thing. That’s deciding to try to train myself not to desire chocolate.)
If we don’t have the freedom to choose our desires, then on what basis is it reasonable to call decisions that take those non-freely-chosen desires into account “free will”?
a model might predict much better than it explains
This is a very deep topic that is treated extensively in David Deutsch’s book, “The Beginning of Infinity” (also “The Fabric of Reality”, particularly chapter 7). If you want to go down that rabbit hole you need to read at least Chapter 7 of TFOR first, otherwise I’ll have to recapitulate Deutsch’s argument. The bottom line is that there is good reason to believe that theories with high predictive power but low explanatory power are not possible.
If you have no will, it makes no sense to ask whether it is free.
Sure. Do you distinguish between “will” and “desire”?
the compatibilist can state necessary conditions too.
Really? What are they?
Do you really want to say that this indicates that I didn’t freely refuse the bribe?
Yes.
Is it maximally free?
Yes, which is to say, not free at all. It is exactly as free as the first case.
The only difference between the two cases is in your awareness of the mechanism behind the decision-making process. In the first case, the mechanism that caused you to choose to refuse the bribe is inside your brain and not accessible to your conscious self. In the second case, (at least part of) the mechanism that causes you to make the choice is more easily accessible to your conscious self. But this is a thin reed because the inaccessibility of your internal decision making process is (almost certainly) a technological limitation, not a fundamental difference between the two cases.
If we don’t have the freedom to choose our desires, then on what basis is it reasonable to call decisions that take those non-freely-chosen desires into account “free will”?
If Jewishness is inherited from one’s mother, and a person’s great^200000-grandmother [EDITED to fix an off-by-1000x error, oops] was more like a chimpanzee than a modern human and had neither ethnicity nor religion as we now understand them, then on what basis is it reasonable to call that person Jewish?
If sentences are made up of letters and letters have no meaning, then on what basis is it reasonable to say that sentences have meaning?
It is not always best to make every definition recurse as far back as it possibly can.
David Deutsch’s book [...] also [...]
I have read both books. I do not think chapter 7 of TFoR shows that theories with high predictive power but low explanatory power are impossible, but it is some time since I read the book and I have just now only glanced at it rather than rereading it in depth. If you reckon Deutsch says that predictive power guarantees explanatory power, could you remind me where in the chapter he does it? Or, if you have an argument that starts from what Deutsch does in that chapter and concludes that predictive power guarantees explanatory power, could you sketch it? (I do not guarantee to agree with everything Deutsch says.)
Do you distinguish between “will” and “desire”?
I seldom use the word “will” other than in special contexts like “free will”. Why do you ask?
What are they [sc. necessary conditions for free will that a compatibilist might state]?
One such might be: “For an action to be freely willed, the causes leading up to it must go via a process of conscious decision by the agent.”
[...] not free at all. It is exactly as free as the first case.
Meh, OK. So let me remind you that the question we were (I thought) discussing at this point was: are there clearer-cut satisfactory criteria for “free will” available to incompatibilists than to compatibilists? Now, of course if you say that by definition nothing counts as an instance of free will then that’s a nice clear-cut criterion, but it also has (so far as it goes) nothing at all to do with freedom or will or anything else.
I think you’re saying something a bit less content-free than that; let me paraphrase and you can correct me if I’m getting it wrong. “Free will means unpredictability-in-principle. Everything is in fact predictable in principle, and therefore nothing is actually an instance of free will.” That’s less content-free because we can then ask: OK, what if you’re wrong about everything being predictable in principle; or what if you’re right but we ask about a hypothetical different world where some things aren’t predictable in principle?
Let’s ask that. Imagine a world in which some sort of objective-collapse quantum mechanics is correct, and many things ultimately happen entirely at random. And let’s suppose that whether or not the brain uses quantum effects in any “interesting” way, it is at least affected by them in a chaos-theory sort of way: that is, sometimes microscale randomness arising from quantum mechanics ends up having macroscale effects on what your brain does. And now let’s situate my two hypothetical examples in this hypothetical world. In this world, of course, nothing is entirely predictable, but some things are much more predictable than others. In particular, the first version of me (deciding whether to take the bribe on the basis of my moral principles and preferences and so forth, which ends up being very predictable because the bribe is small and my principles and preferences strong) is much more predictable (both in principle and in practice) in this world than the second version (deciding, at gunpoint, on the basis of what I will now make a quantum random number generator rather than a coin flip). In this world, would you accordingly say that first-me is choosing much less freely than second-me?
The only difference between the two cases is your awareness [...]
I don’t think that’s correct. For instance, in the second case I am coerced by another agent, and in the first I’m not; in the first case my decision is a consequence of my preferences regarding the action in question, and in the second it isn’t (though it is a consequence of my preference for living over dying; but I remark that your predictability criterion gives the exact same result if in the second case the random number generator is wired directly into my brain so as to control my actions with no conscious involvement on my part at all).
You may prefer notions of free will with a sort of transitive property, where if X is free and X is caused by Y1,...,Yn (and nothing else) then one of the Y must be free. (Or some more sophisticated variant taking into account the fact that freedom comes in degrees, that the notion of “cause” is kinda problematic, etc.) I see no reason why we have to define free will in such a way. We are happy to say that a brain is intelligent even though it is made of neurons which are not intelligent, that a statue resembles Albert Einstein even though it is made of atoms that do not resemble Einstein, that a woolly jumper is warm even though it is made of individual fibres that aren’t, etc.
It is not always best to make every definition recurse as far back as it possibly can.
Of course. Does this mean that you concede that our desires are not freely chosen?
I have read both books.
Oh, good!
I do not think chapter 7 of TFoR shows that theories with high predictive power but low explanatory power are impossible
You’re right, the argument in chapter 7 is not complete, it’s just the 80⁄20 part of Deutsch’s argument, so it’s what I point people to first. And non-explanatory models with predictive power are not impossible, they’re just extremely unlikely (probability indistinguishable from zero). The reason they are extremely unlikely is that in a finite universe like ours there can exist only a finite amount of data, but there are an infinite number of theories consistent with that data, nearly all of which have low predictive power. Explanatory power turns out to be the only known effective filter for theories with high predictive power. Hence, it is overwhelmingly likely that a theory with high predictive power will have high explanatory power.
In this world, would you accordingly say that first-me is choosing much less freely than second-me?
No.
First, I disagree with “Free will means unpredictability-in-principle.” It doesn’t mean UIP, it simply requires UIP. Necessary, not sufficient.
Second, to be “real” free will, there would have to be some circumstances where you accept the bribe and surprise me. In this respect, you’ve chosen a bad example to make your point, so let me propose a better one: we’re in a restaurant and I know you love burgers and pasta, both of which are on the menu. I know you’ll choose one or the other, but I have no idea which. In that case, it’s possible that you are making the choice using “real” free will.
in the second case I am coerced by another agent, and in the first I’m not
Not so. In the first case you are being coerced by your sense of morality, or your fear of going to prison, or something like that. That’s exactly what makes your choice not to take the bribe predictable. The only difference is that the mechanism by which you are being coerced in the second case is a little more overt.
You may prefer notions of free will with a sort of transitive property
No, what I require is a notion of free will that is the same for all observers, including a hypothetical one that can predict anything that can be predicted in principle. (I also want to give this hypothetical observer an oracle for the halting problem because I don’t think that Turing machines exercise “free will” or “decide” whether or not to halt.) This is simply the same criterion I apply to any phenomenon that someone claims is objectively real.
Does this mean that you concede that our desires are not freely chosen?
I think some of our desires are more freely chosen than others. I do not think an action chosen on account of a not-freely-chosen desire is necessarily best considered unfree for that reason.
[...] are not impossible [...]
That isn’t quite what you said before, but I’m happy for you to amend what you wrote.
The reason they are extremely unlikely [...]
It seems to me that the argument you’re now making has almost nothing to do with the argument in chapter 7 of Deutsch’s book. That doesn’t (of course) in any way make it a bad argument, but I’m now wondering why you said what you did about Deutsch’s books.
Anyway. I think almost all the work in your argument (at least so far as it’s relevant to what we’re discussing here) is done by the following statement: “Explanatory power turns out to be the only known effective filter for theories with high predictive power.” I think this is incorrect; simplicity plus past predictive success is a pretty decent filter too. (Theories with these properties have not infrequently turned out to be embeddable in theories with good explanatory power, of course, as when Mendeleev’s empirically observed periodicity was explained in terms of electron shells, and the latter further explained in terms of quantum mechanics.)
It doesn’t mean UIP, it simply requires UIP.
OK, but in that case either you owe us something nearer to necessary and sufficient conditions, or else you need to retract your claim that incompatibilism does better than compatibilism in the “is there a nice clear criterion?” test. Also, if you aren’t claiming anything close to “free will = UIP” then I no longer know what you meant by saying that ialdabaoth got it more or less right.
to be “real” free will, there would have to be some circumstances where [...]
Sure. That would be why I said “with great confidence” rather than “with absolute certainty”. I might, indeed, take the bribe after all, despite all those very strong reasons to expect me not to. But it’s extremely unlikely. (So no, I don’t agree that I’ve “chosen a bad example”; rather, I think you misunderstood the example I gave.)
let me propose a better one
If you say “you chose a bad example to make your point, so let me propose a better one” and then give an example that doesn’t even vaguely gesture in the direction of making my point, I’m afraid I start to doubt that you are arguing in good faith.
Not so. In the first case you are being coerced by [...]
The things you describe me as being “coerced by” are (1) not agents and (2) not external to me. These are not irrelevant details, they are central to the intuitive meaning of “free will” that we’re looking for philosophically respectable approximations to. (Perhaps you disagree with my framing of the issue. I take it that that’s generally the right way to think about questions like “what is free will?”.)
In particular, I think your claim about “the only difference” is flatly wrong.
what I require is a notion of free will that is the same for all observers, including a hypothetical one that can predict anything that can be predicted in principle.
That sounds sensible on first reading, but I think actually it’s a bit like saying “what I require is a notion of right and wrong that is the same for all observers, including a hypothetical one that doesn’t care about suffering” and inferring that our notions of right and wrong shouldn’t have anything to do with suffering. Our words and concepts need to be useful to us, and if some such concept would be uninteresting to a hypothetical superbeing that can predict anything that’s predictable in principle, that is not sufficient reason for us not to use it. Still more when your hypothetical superbeing needs capabilities that are probably not even in principle possible within our universe.
(I think, in fact, that even such a superbeing might have reason to talk about something like “free will”, if it’s talking about very-limited beings like us.)
any phenomenon that someone claims is objectively real.
I haven’t, as it happens, been claiming that free will is “objectively real”. All I claim is that it may be a useful notion. Perhaps it’s only as “objectively real” as, say, chess; that is, it applies to us, and what it is is fundamentally dependent on our cognitive and other peculiarities, and a world of your hypothetical superbeings might be no more interested in it than they presumably would be in chess, but you can still ask “to what extent is X exercising free will?” in the same way as you could ask “is X a better move than Y, for a human player with a human opponent?”.
an example that doesn’t even vaguely gesture in the direction of making my point
Sorry about that. I really was trying to be helpful.
I haven’t, as it happens, been claiming that free will is “objectively real”. All I claim is that it may be a useful notion.
Well, heck, what are we arguing about then? Of course it’s a useful notion.
chess
A better analogy would be “simultaneous events at different locations in space.” Chess is a mathematical abstraction that is the same for all observers. Simultaneity, like free will, depends on your point of view.
You’re arguing that no one has it and AIUI that nothing in the universe ever could have it. Doesn’t seem that useful to me.
Chess is a mathematical abstraction that is the same for all observers.
I did consider substituting something like cricket or baseball for that reason. But I think the idea that free will is viewpoint-dependent depends heavily on what notion of free will you’re working with. I’m still not sure what yours actually is, but mine doesn’t have that property, or at any rate doesn’t have it to so great an extent as yours seems to.
Free will is a useful notion because we have the perception of having it, and so it’s useful to be able to talk about whatever it is that we perceive ourselves to have even though we don’t really have it. It’s useful in the same way that it’s useful to talk about, say, “the force of gravity” even though in reality there is no such thing. (That’s actually a pretty good analogy. The force of gravity is a reasonable approximation to the truth for nearly all everyday purposes even though conceptually it is completely wrong. Likewise with free will.)
You said that a chess-playing computer has (some) free will. I disagree (obviously because I don’t think anything has free will). Do you think Pachinko machines have free will? Do they “decide” which way to go when they hit a pin? Does the atmosphere have free will? Does it decide where tornadoes appear?
When I say “real free will” I mean this:
Decisions are made by my conscious self. This rules out pachinko machines, the atmosphere, and chess-playing computers having free will.
Before I make a decision, it must be actually possible for me to choose more than one alternative. Ergo, if I am reliably predictable, I cannot have free will because if I am reliably predictable then it is not possible for me to choose more than one alternative. I can only choose the alternative that a hypothetical predictor would reliably predict.
I don’t know how to make it any clearer than that.
it’s useful to be able to talk about whatever it is that we perceive ourselves to have even though we don’t really have it.
I think it’s more helpful to talk about whatever we have that we’re trying to talk about, even if some of what we say about it isn’t quite right, which is why I prefer notions of free will that don’t become necessarily wrong if the universe is deterministic or there’s an omnipotent god or whatever.
I agree that gravity makes a useful analogy. Gravity behaves in a sufficiently force-like way (at least in regions of weakish spacetime curvature, like everywhere any human being could possibly survive) that I think for most purposes it is much better to say “there is, more or less, a force of gravity, but note that in some situations we’ll need to talk about it differently” than “there is no force of gravity”. And I would say the same about “free will”.
Do you think Pachinko machines have free will?
I don’t know much about Pachinko machines, but I don’t think they have any processes going on in them that at all resemble human deliberation, in which case I would not want to describe them as having free will even to the (very attenuated) extent that a chess program might have.
Does the atmosphere have free will?
Again, I don’t think there are any sort of deliberative processes going on there, so no free will.
I mean this: [...] Decisions are made by my conscious self.
So there are two parts to this, and I’m not sure to what extent you actually intend them both. Part 1: decisions are made by conscious agents. Part 2: decisions are made, more specifically, by those agents’ conscious “parts” (of course this terminology doesn’t imply an actual physical division).
it must be actually possible for me to choose more than one alternative
Of course “actually possible” is pretty problematic language; what counts as possible? If I’m understanding you right, you’d cash it out roughly as follows: look at the probability distribution of possible outcomes in advance of the decision; then freedom = entropy of that probability distribution (or something of the kind).
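That cashing-out can be made concrete. The formula below is just standard Shannon entropy; calling the result a measure of “freedom” is, of course, the position under discussion, not an established fact:

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a probability distribution over outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A reliably predictable decision: one outcome with probability 1,
# zero entropy, hence "no freedom" on the proposed measure.
predictable = entropy([1.0])        # 0.0 bits

# A coin-flip decision between two alternatives: one bit of entropy,
# maximally "free" between two options on the same measure.
coin_flip = entropy([0.5, 0.5])     # 1.0 bits
```

Note that on this measure the coerced coin-flipper from the earlier example scores higher than the principled bribe-refuser, which is exactly the oddity at issue.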
So then freedom depends on what probability distribution you take, and you take the One True Measure of freedom to be what you get for an observer who knows everything about the universe immediately before the decision is made (more precisely, everything in the past light-cone of the decision); if the universe is deterministic then that’s enough to determine the answer after the decision is made too, so no decisions are free.
One obvious problem with this is that our actual universe is not deterministic in the relevant sense. We can make a device based on radioactive decay or something for which knowledge of all that can be known in advance of its operation is not sufficient to tell you what it will output. For all we know, some or all of our decisions are actually affected enough by “amplified” quantum effects that they can’t be reliably predicted even by an observer with access to everything in their past light-cone.
It might be worse. Perhaps some of our decisions are so affected and some not. If so, there’s no reason (that I can see) to expect any connection between “degree of influence from quantum randomness” and any of the characteristics we generally think of as distinguishing free from not-so-free—practical predictability by non-omniscient observers, the perception of freeness that you mentioned before, external constraints, etc.
It doesn’t seem to me that predictability by a hypothetical “past-omniscient” observer has much connection with what in other contexts we call free will. Why make it part of the definition?
I prefer notions of free will that don’t become necessarily wrong if the universe is deterministic or there’s an omnipotent god or whatever.
That’s like saying, “I prefer triangles with four sides.” You are, of course, free to prefer whatever you want and to use words however you want. But the word “free” has an established meaning in English which is fundamentally incompatible with determinism. Free means, “not under the control or in the power of another; able to act or be done as one wishes.” If my actions are determined by physics or by God, I am not free.
I don’t think they have any processes going on in them that at all resemble human deliberation
And you think chess-playing machines do?
BTW, if your standard for free will is “having processing that resembles human deliberation” then you’ve simply defined free will as “something that humans have” in which case the question of whether or not humans have free will becomes very uninteresting because the answer is tautologically “yes”.
So there are two parts to this
I’d call them two “interpretations” rather than two “parts”. But I intended the latter: to qualify as free will on my view, decisions have to be made by the conscious part of a conscious agent. If I am conscious but I base my decision on a coin flip, that’s not free will.
“actually possible” is pretty problematic language; what counts as possible?
Whatever is not impossible. In this case (and we’ve been through this) if I am reliably predictable then it is impossible for me to do anything other than what a hypothetical reliable predictor predicts. That is what “reliably predictable” means. That is why not being reliably predictable is a necessary but not sufficient condition for free will. It’s really not complicated.
Why make it part of the definition?
Because that is what the “free” part of “free will” means. If I am faced with a choice between A and B and a reliable predictor predicts I am going to choose A, then I cannot choose B (again, this is what “reliable predictor” means). If I cannot choose B then I am not free.
the word “free” has an established meaning in English which is fundamentally incompatible with determinism.
I don’t think that’s at all clear, and the fact that a clear majority of philosophers are compatibilists indicates that a bunch of people who spend their lives thinking about this sort of thing also don’t think it’s impossible for “free” to mean something compatible with determinism.
Let’s take a look at that definition of yours, and see what it says if my decisions are determined by the laws of physics. “Not under the control or in the power of another”? That’s OK; the laws of physics, whatever they are, are not another agent. “Able to act or be done as one wishes”? That’s OK too; of course in this scenario what I wish is also determined by the laws of physics, but the definition doesn’t say anything about that.
(I wouldn’t want to claim that the definition you selected is a perfect one, of course.)
And you think chess-playing machines do [sc. have processes going on in them that at all resemble human deliberation]?
Yup. Much much simpler, of course. Much more limited, much more abstract. But yes, a tree-search with an evaluation at the leaves does indeed resemble human deliberation somewhat. (Do I need to keep repeating in each comment that all I claim is that arguably chess-playing programs have a very little bit of free will?)
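For what it’s worth, the tree-search-with-leaf-evaluation just mentioned fits in a few lines. This is a hypothetical toy minimax over an abstract game tree, not any real chess engine:

```python
# A minimal sketch of tree search with evaluation at the leaves: the
# program considers alternatives, weighs their consequences, and picks
# the line it evaluates best -- the "deliberation-like" part.

def minimax(node, depth, maximizing):
    """Return the value of the best line from this node."""
    if depth == 0 or not node.get("children"):
        return node["value"]                      # evaluate leaf position
    values = [minimax(child, depth - 1, not maximizing)
              for child in node["children"]]
    return max(values) if maximizing else min(values)

# Toy tree: two candidate moves, each with two opponent replies.
tree = {"children": [
    {"children": [{"value": 3}, {"value": 5}]},   # move A
    {"children": [{"value": 2}, {"value": 9}]},   # move B
]}
best = minimax(tree, 2, True)                     # chooses move A's line (3)
```

Whether anything on this spectrum deserves the word “deliberation” is, of course, exactly what’s in dispute.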
if your standard for free will is “having processing that resembles human deliberation”
Nope. But not having such processing seems like a good indication of not having free will, because whatever free will is it has to be something to do with making decisions, and nothing a pachinko machine or the weather does seems at all decision-like, and I think the absence of any process that looks at all like deliberation seems to me to be a large part of why. (Though I would be happy to reconsider in the face of something that behaves in ways that seem sufficiently similar to, e.g., apparently-free humans despite having very different internals.)
if I am reliably predictable then it is impossible for me to do anything other than what a hypothetical reliable predictor predicts
I have pointed out more than once that in this universe there is never prediction that reliable, and anything less reliable makes the word “impossible” inappropriate. For whatever reason, you’ve never seen fit even to acknowledge my having done so.
But let’s set that aside. I shall restate your claim in a form I think better. “If you are reliably predictable, then it is impossible for your choice and the predictor’s prediction not to match.” Consider a different situation, where instead of being predicted your action is being remembered. If it’s reliably rememberable, then it is impossible for your action and the rememberer’s memory not to match—but I take it you wouldn’t dream of suggesting that that involves any constraint on your freedom.
So why should it be different in this case? One reason would be if the predictor, unlike the rememberer, were causing your decision. But that’s not so; the prediction and your decision are alike consequences of earlier states of the world. So I guess the reason is because the successful prediction indicates the fact that your decision is a consequence of earlier states of the world. But in that case none of what you’re saying is an argument for incompatibilism; it is just a restatement of incompatibilism.
It’s really not complicated.
Please consider the possibility that other people besides yourself have thought about this stuff, are reasonably intelligent, and may disagree with you for reasons other than being too stupid to see what is obvious to you.
again, this is what “reliable predictor” means
No. It means you will not choose B, which is not necessarily the same as that you cannot choose B. And (I expect I have said this at least once already in this discussion) words like “cannot” and “impossible” have a wide variety of meanings and I see no compelling reason why the only one to use when contemplating “free will” is the particular one you have in mind.
This would not be the first time in history that the philosophical community was wrong about something.
Do I need to keep repeating in each comment that all I claim is that arguably chess-playing programs have a very little bit of free will?
No, I get that. But “a very little bit” is still distinguishable from zero, yes?
nothing a pachinko machine or the weather does seems at all decision-like
Nothing about it seems human decision-like. But that’s a prejudice because you happen to be human. See below...
I would be happy to reconsider in the face of something that behaves in ways that seem sufficiently similar to, e.g., apparently-free humans despite having very different internals.
I believe that intelligent aliens could exist (in fact, almost certainly do exist). I also believe that fully intelligent computers are possible, and might even be constructed in our lifetime. I believe that any philosophy worth adhering to ought to be IA-ready and AI-ready, that is, it should not fall apart in the face of intelligent aliens or artificial intelligence. (Aside: This is the reason I do not self-identify as a “humanist”.)
Also, it is far from clear that chess computers work anything at all like humans. The hypothesis that humans make decisions by heuristic search has been pretty much disproven by >50 years of failed AI research.
I have pointed out more than once that in this universe there is never prediction that reliable, and anything less reliable makes the word “impossible” inappropriate. For whatever reason, you’ve never seen fit even to acknowledge my having done so.
I hereby acknowledge your having pointed this out. But it’s irrelevant. All I require for my argument to hold is predictability in principle, not predictability in fact. That’s why I always speak of a hypothetical rather than an actual predictor. In fact, my hypothetical predictor even has an oracle for the halting problem (which is almost certainly not realizable in this universe) because I don’t believe that Turing machines exercise free will when “deciding” whether or not to halt.
it is just a restatement of incompatibilism.
That’s possible. But just because incompatibilism is a tautology does not make it untrue.
I don’t think it is a tautology. The state of affairs for a reliable predictor to exist would be that there is something that causes both my action and the prediction, and that whatever this is is accessible to the predictor before it is accessible to me (otherwise it’s not a prediction). That doesn’t feel like a tautology to me, but I’m not going to argue about it. Either way, it’s true.
Please consider the possibility that other people besides yourself have thought about this stuff
Of course. As soon as someone presents a cogent argument I’m happy to consider it. I haven’t heard one yet (despite having read this).
It means you will not choose B, which is not necessarily the same as that you cannot choose B.
That’s really the crux of the matter I suppose. It reminds me of the school of thought on the problem of theodicy which says that God could eliminate evil from the world, but he chooses not to for some reason that is beyond our comprehension (but is nonetheless wise and good and loving). This argument has always struck me as a cop-out. If God’s failure to use His super-powers for good is reliably predictable, then that to me is indistinguishable from God not having those super powers to begin with.
You can see the absurdity of it by observing that this same argument can be applied to anything, not just God. I can argue with equal validity that rocks can fly, they just choose not to. Or that I could, if I wanted to, mount an argument for my position that is so compelling that you would have no choice but to accept it, but I choose not to because I am benevolent and I don’t want to shatter your illusion of free will.
I don’t see any possible way to distinguish between “can not” and “with 100% certainty will not”. If they can’t be distinguished, they must be the same.
I already pointed out that your own choice of definition doesn’t have the property you claimed (being fundamentally incompatible with determinism). I think that suffices to make my point.
This would not be the first time in history that the philosophical community was wrong about something.
Very true. But if you are claiming that some philosophical proposition is (not merely true but) obvious and indeed true by definition, then firm disagreement by a majority of philosophers should give you pause.
You could still be right, of course. But I think you’d need to offer more and better justification than you have so far, to be at all convincing.
But “a very little bit” is still distinguishable from zero, yes?
Well, the actual distinguishing might be tricky, especially as all I’ve claimed is that arguably it’s so. But: yes, I have suggested—to be precise about my meaning—that some reasonable definitions of “free will” may have the consequence that a chess-playing program has a teeny-tiny bit of free will, in something like the same way as John McCarthy famously suggested that a thermostat has (in a very aetiolated sense) beliefs.
Nothing about it seems human decision-like.
Nothing about it seems decision-like at all. My notion of what is and what isn’t a decision is doubtless influenced by the fact that the most interesting decision-making agents I am familiar with are human, which is why an abstract resemblance to human decision-making is something I look for. I have only a limited and (I fear) unreliable idea of what other forms decision-making can take. As I said, I’ll happily revise this in the light of new data.
I believe that any philosophy worth adhering to ought to be IA-ready and AI-ready
Me too; if you think that what I have said about decision-making isn’t, then either I have communicated poorly or you have understood poorly or both. More precisely: my opinions about decision-making surely aren’t altogether IA/AI-ready, for the rather boring reason that I don’t know enough about what intelligent aliens or artificial intelligences might be like for my opinions to be well-adjusted for them. But I do my best, such as it is.
The hypothesis that humans make decisions by heuristic search has been pretty much disproven
First: No, it hasn’t. The hypothesis that humans make all their decisions by heuristic search certainly seems pretty unlikely at this point, but so what? Second and more important: I was not claiming that humans make decisions by tree searching. (Though, as it happens, when playing chess we often do—though our trees are quite different from the computers’.) I claim, rather, that humans make decisions by a process along the lines of: consider possible actions, envisage possible futures in each case, evaluate the likely outcomes, choose something that appears good. Which also happens to be an (extremely handwavy and abstract) description of what a chess-playing program does.
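That abstract loop (consider actions, envisage futures, evaluate outcomes, choose what looks good) can be sketched in a few lines. The game below is a made-up toy, not chess, and every name and value in it is illustrative:

```python
# Toy "decision-making" in the abstract sense described above:
# states are integers, actions add one or double, and the evaluation
# simply prefers states near a target. Purely illustrative.

TARGET = 10

def actions(state):
    """Possible actions and the futures they lead to."""
    return [("add1", state + 1), ("double", state * 2)]

def evaluate(state):
    """Judge how good an outcome looks (closer to TARGET is better)."""
    return -abs(TARGET - state)

def choose(state, depth=3):
    """Consider actions, envisage futures up to `depth` moves ahead,
    and pick the action whose best reachable future looks best."""
    def best_value(s, d):
        if d == 0:
            return evaluate(s)
        return max(best_value(nxt, d - 1) for _, nxt in actions(s))
    return max(actions(state), key=lambda a: best_value(a[1], depth - 1))[0]

print(choose(2))  # → double
```

Nothing in this sketch is specific to chess; the point is only that "deliberation" in this minimal sense is a perfectly ordinary computation.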
All I require for my argument to hold is predictability in principle, not predictability in fact.
Perfectly reliable prediction is not possible in principle in our universe. Not even with a halting oracle.
because I don’t believe that Turing machines exercise free will when “deciding” whether or not to halt.
I think the fact that you never actually get to observe the event of “such-and-such a TM not halting” means you don’t really need to worry about that. In any case, there seems to me something just a little improper about finagling your definition in this way to make it give the results you want: it’s as if you chose a definition in some principled way, found it gave an answer you didn’t like, and then looked for a hack to make it give a different answer.
just because incompatibilism is a tautology does not make it untrue.
Of course not. But what I said was neither (1) that incompatibilism is a tautology nor (2) that that makes it untrue. I said that (1) your argument was a tautology, which (2) makes it a bad argument.
As soon as someone presents a cogent argument I’m happy to consider it.
I think that may say more about your state of mind than about the available arguments. In any case, the fact that you don’t find any counterarguments to your position cogent is not (to my mind) good reason for being rudely patronizing to others who are not convinced by what you say.
It reminds me of [...]
I regret to inform you that “argument X has been deployed in support of wrong conclusion Y” is not good reason to reject argument X—unless the inference from X to Y is watertight, which in this case I hope you agree it is not.
I don’t see any possible way to distinguish between “can not” and “with 100% certainty will not”.
This troubles me not a bit, because you can never say “with 100% certainty will not” about anything with any empirical content. Not even if you happen to be a perfect reasoner and possess a halting oracle.
And at degrees of certainty less than 100%, it seems to me that “almost certainly will not” and “very nearly cannot” are quite different concepts and are not so very difficult to disentangle, at least in some cases. Write down ten common English boys’ names. Invite me to choose names for ten boys. Can I choose the names you wrote down? Of course. Will I? Almost certainly not. If the notion of possibility you’re working with leads you to a different conclusion, so much the worse for that notion of possibility.
Sorry, it is not my intention to be either rude or patronizing. But there are some aspects of this discussion that I find rather frustrating, and I’m sorry if that frustration occasionally manifests itself as rudeness.
you can never say “with 100% certainty will not” about anything with any empirical content
Of course I can: with 100% certainty, no one will exhibit a working perpetual motion machine today. With 100% certainty, no one will exhibit superluminal communication today. With 100% certainty, the sun will not rise in the west tomorrow. With 100% certainty, I will not be the president of the United States tomorrow.
Nothing about [a pachinko machine] seems decision-like at all.
a thermostat has (in a very aetiolated sense) beliefs.
Do you believe that a thermostat makes decisions? Do you believe that a thermostat has (a little bit of) free will?
Perfectly reliable prediction is not possible in principle in our universe. Not even with a halting oracle.
I presume you mean “perfectly reliable prediction of everything is not possible in principle.” Because perfectly reliable prediction of some things (in principle) is clearly possible. And perfectly reliable prediction of some things (in principle) with a halting oracle is possible by definition.
with 100% certainty, no one will exhibit a working perpetual motion machine today
100%? Really? Not just “close to 100%, so let’s round it up” but actual complete certainty?
I too am a believer in the Second Law of Thermodynamics, but I don’t see on what grounds anyone can be 100% certain that the SLoT is universally correct. I say this mostly on general principles—we could just have got the physics wrong. More specifically, there are a few entropy-related holes in our current understanding of the world—e.g., so far as I know no one currently has a good answer to “why is the entropy so low at the big bang?” nor to “is information lost when things fall into black holes?”—so just how confidently would you bet that figuring out all the details of quantum gravity and of the big bang won’t reveal any loopholes?
Now, of course there’s a difference between “the SLoT has loopholes” and “someone will reveal a way to exploit those loopholes tomorrow”. The most likely possible-so-far-as-I-know worlds in which perpetual motion machines are possible are ones in which we discover the fact (if at all) after decades of painstaking theorizing and experiment, and in which actual construction of a perpetual motion machine depends on somehow getting hold of a black hole of manageable size and doing intricate things with it. But literally zero probability that some crazy genius has done it in his basement and is now ready to show it off? Nope. Very small indeed, but not literally zero.
the sun will not rise in the west. [...] I will not be the president of the United States
Again, not zero. Very very very tiny, but not zero.
Do you believe that a thermostat makes decisions?
It does something a tiny bit like making decisions. (There is a certain class of states of affairs it systematically tries to bring about.) However, there’s nothing in what it does that looks at all like a deliberative process, so I wouldn’t say it has free will even to the tiny extent that maybe a chess-playing computer does.
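For comparison, here is roughly everything a simple thermostat does, with illustrative setpoint and hysteresis values. There is a class of states it systematically pursues, but nothing resembling deliberation:

```python
# A thermostat's entire "decision" process, sketched. The setpoint
# and hysteresis values are illustrative assumptions, not anyone's
# actual claim about a particular device.

def thermostat(temp, setpoint=20.0, hysteresis=0.5, heating=False):
    """Return whether the heater should be on."""
    if temp < setpoint - hysteresis:
        return True                 # too cold: heat
    if temp > setpoint + hysteresis:
        return False                # too warm: don't
    return heating                  # inside the dead band: keep current state

print(thermostat(18.0))  # → True
print(thermostat(22.0))  # → False
```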
For the avoidance of doubt: The level of decision-making, free will, intelligence, belief-having, etc., that these simple (or in the case of the chess program not so very simple) devices exhibit is so tiny that for most purposes it is much simpler and more helpful simply to say: No, these devices are not intelligent, do not have beliefs, etc. Much as for most purposes it is much simpler and more helpful to say: No, the coins in your pocket are not held away from the earth by the gravitational pull of your body. Or, for that matter: No, there is no chance that I will be president of the United States tomorrow.
Where are you heading with these questions? I mean, are you expecting them to help achieve mutual understanding, or are you playing to the gallery and trying to get me to say things that sound silly? (If so, I think you may have misjudged the gallery.)
perfectly reliable prediction of some things (in principle) is clearly possible.
Empirical things? Do you not, in fact, believe in quantum mechanics? Or do you think “in half the branches, by measure, X will happen, and in the other half Y will happen” counts as a perfectly reliable prediction of whether X or Y will happen?
is possible by definition.
Only perfectly non-empirical things. Sure, you can “predict” that a given Turing machine will halt. But you might as well say that (even without a halting oracle) you can “predict” that 3x4=12. As soon as that turns into “this actual multiplying device, right here, will get 12 when it tries to multiply 3 by 4”, you’re in the realm of empirical things, and all kinds of weird things happen with nonzero probability. You build your Turing machine but it malfunctions and enters an infinite loop. (And then terminates later when the sun enters its red giant phase and obliterates it. Well done, I guess, but then your prediction that that other Turing machine would never terminate isn’t looking so good.) You build your multiplication machine and a cosmic ray changes the answer from 12 to 14. You arrange pebbles in a 3x4 grid, but immediately before you count the resulting pebbles all the elementary particles in one of the pebbles just happen to turn up somewhere entirely different, as permitted (albeit with staggeringly small probability) by fundamental physics.
[EDITED to fix formatting screwage; silly me, using an asterisk to denote multiplication.]
Where are you heading with these questions? I mean, are you expecting them to help achieve mutual understanding,
I’m not sure what I “expect”, but yes, I am trying to achieve mutual understanding. I think we have a fundamental disconnect in our intuitions of what “free will” means, and I’m trying to get a handle on what it is. If you think that a thermostat has even a little bit of free will, then we’ll just have to agree to disagree. If you think even a Nest thermostat, which does some fairly complicated processing before “deciding” whether or not to turn on the heat, has even a little bit of free will, then we’ll just have to agree to disagree. If you think that an industrial control computer, or an airplane autopilot, which do some very complicated processing before “deciding” what to do, have even a little bit of free will, then we’ll have to agree to disagree. Likewise for weather systems, pachinko machines, Geiger counters, and computers searching for a counterexample to the Collatz conjecture. If you think any of these things has even a little bit of free will, then we will simply have to agree to disagree.
100%? Really? Not just “close to 100%, so let’s round it up” but actual complete certainty?
Most people, by “100% certainty”, mean “certain enough that for all practical purposes it can be treated as 100%”. Not treating the statement as meaning that is just Internet literalness of the type that makes people say everyone on the Internet has Aspergers.
Most people, by “100% certainty”, mean “certain enough that for all practical purposes it can be treated as 100%”.
Not in the context of discussions of omniscience and whether pachinko machines have free will :-/
that makes people say everyone on the Internet has Aspergers.
People who say this should go back to their TVs and Bud Lights and not try to overexert themselves with complicated things like that system of intertubes.
However, in this particular discussion the distinction between certainty and something-closely-resembling-certainty is actually important, for reasons I mentioned earlier.
Are you seriously arguing that “free” in “free will” might mean the same thing as (say) “free” in “free beer”? Come on.
What ontological category does physics have in your view of the world?
That’s a very good question, and it depends (ironically) on which of two possible definitions of physics you’re referring to. If you mean physics-the-scientific-enterprise (let’s call that physics1) then it exists in the ontological category of human activity (along with things like “commerce”). If you mean the underlying processes which are the object of study in physics1 (let’s call that physics2) then I’d put those in the ontological category of objective reality.
Note that ontological categories are not mutually exclusive. Existence is a vector space. Physics1 is also part of objective reality, because it is an emergent property of physics2.
You can see free will as “1 d : enjoying personal freedom : not subject to the control or domination of another”. There is no other person who controls your actions.
The next definition is: “2 a : not determined by anything beyond its own nature or being : choosing or capable of choosing for itself”
I think you can make a good case that the way someone’s neurons work is part of their own nature or being.
Your ontological model, in which there’s an entity called physics_2 that causes neurons to do something that’s not in their nature or being, is problematic.
I think this is a difference in the definition of the word “I”, which can reasonably be taken to mean at least three different things:
The totality of my brain and body and all of the processes that go on there. On this definition, “I have lungs” is a true statement.
My brain and all of the computational processes that go on there (but not the biological processes). On this definition, “I have lungs” is a false statement, but “I control my breathing” is a true statement.
That subset of the computational processes going on in my brain that we call “conscious.” On this view, the statement, “I control my breathing” is partially true. You can decide to stop breathing for a while, but there are hard limits on how long you can keep it up.
To me, the question of whether I have free will is only interesting on definition #3 because my conscious self is the part of me that cares about such things. If my conscious self is being coerced or conned, then I (#3) don’t really care whether the origin of that coercion is internal (part of my sub-conscious or my physiology) or external.
Basically, after previously arguing that there is only one reasonable definition of free will, you have now moved to the position that there are multiple reasonable definitions, and that you have particular reasons to prefer focusing on a specific one?
Is that a reasonable description of your position?
No, not even remotely close. We seem to have a serious disconnect here.
For starters, I don’t think I ever gave a definition of “free will”. I have listed what I feel to be (two) necessary conditions for it, but I don’t think I ever gave sufficient conditions, which would be necessary for a definition. I’m not sure I even know what sufficient conditions would be. (But I think those necessary conditions, plus the known laws of physics, are enough to show that humans don’t have free will, so I think my position is sound even in the absence of a definition.) And I did opine at one point that there is only one reasonable interpretation of the word “free” in the context of a discussion of “free will.” But that is not at all the same thing as arguing that there is only one reasonable definition of “free will.” Also, the question of what “I” means is different from the question of what “free will” means. But both are (obviously) relevant to the question of whether or not I have free will.
The reason I brought up the definition of “I” is because you wrote:
Your ontological model, in which there’s an entity called physics_2 that causes neurons to do something that’s not in their nature or being, is problematic
That is not my position. (And ontology is a bit of a red herring here.) I can’t even imagine what it means for a neuron to do something that’s “not in its nature or being,” let alone that this departure from “nature or being” could be caused by physics. That’s just bizarre. What did I say that made you think I believed this?
I can’t define “free will” just like I can’t define “pornography.” But I have an intuition about free will (just like I have one about porn) that tells me that, whatever it is, it is not something that is possessed by pachinko machines, individual photons, weather systems, or a Turing machine doing a straightforward search for a counter-example to the Collatz conjecture. I also believe that “will not with 100% reliability” is logically equivalent to “can not” in that there is no way to distinguish these two situations. If you wish to dispute this, you will have to explain to me how I can determine whether the reason that the moon doesn’t leave earth orbit is because it can’t or because it chooses not to.
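The Collatz-searching machine mentioned above is worth picturing concretely. A minimal sketch of such a straightforward search (the bound and step limit are my own illustrative choices; a real search would never terminate unless a counterexample exists) might be:

```python
# Test numbers one by one, checking that each Collatz trajectory
# falls back to 1. Nothing here looks remotely like deliberation.

def reaches_one(n, max_steps=10_000):
    """Follow the Collatz map from n; True if it hits 1 within max_steps."""
    steps = 0
    while n != 1 and steps < max_steps:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return n == 1

def search(limit):
    """Return the first candidate counterexample below limit, or None."""
    for n in range(1, limit):
        if not reaches_one(n):
            return n
    return None

print(search(10_000))  # → None
```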
I can’t even imagine what it means for a neuron to do something that’s “not in its nature or being,” let alone that this departure from “nature or being” could be caused by physics. That’s just bizarre. What did I say that made you think I believed this?
I thought you made an argument that physical determinism somehow means there’s no free will, because physics causes an effect to happen. If I misunderstood the argument you were making, feel free to point that out.
Given the dictionary definition of “free”, that argument seems to be flawed.
I can’t define “free will” just like I can’t define “pornography.”
That’s an appeal to the authority of your personal intuition. It prevents your statements from being falsifiable. It moves the statements into “too vague to be wrong” territory.
If I have a conversation with a person in order to debug their acrophobia, then I’m going to use words in a way where I only care about the effect of the words, not about whether my sentences make falsifiable statements. If, however, I want to have a rational discussion on LW, then I strive to use rational language: language that makes concrete claims that allow others to engage with me in rational discourse.
Again, that’s what distinguishes rational!LW from rational!NewAtheist. If you don’t simply want a replacement for religion, but care about reasoning, then it’s useful not to be too vague to be wrong.
The thing you wrote about only calling the part of you “I” that corresponds to your conscious mind looks to me like subclinical depersonalization disorder: a notion of the self that can be defended, but that’s unhealthy to have.
I don’t merely have lungs. My lungs are part of the person that I happen to be.
If you wish to dispute this, you will have to explain to me how I can determine whether the reason that the moon doesn’t leave earth orbit is because it can’t or because it chooses not to.
If we stay with the dictionary definition of freedom, then look at the nature of the moon: is the fact that it revolves around the earth an emergent property of how the complex internals of the moon work, or isn’t it?
My math in that area isn’t perfect, but “objects that can be modeled by nontrivial nondeterministic finite automata” might be a criterion.
Nontrivial nondeterministic finite automata can reasonably be described as using heuristics to make choices. They make them based on the algorithm that’s programmed into them, and that algorithm can reasonably be described as being part of the nature of a specific nondeterministic finite automaton.
I don’t think the way the moon revolves around the earth is reasonably modeled with nontrivial nondeterministic finite automata.
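A toy sketch of the automaton idea above; the states, symbols, and transition relation here are entirely made up. The point is only that the transition relation allows several successors, and the “choice” among them follows rules that are part of the machine’s own definition:

```python
import random

# A toy nondeterministic automaton. Several successor states are
# allowed for some (state, symbol) pairs, so each step involves a
# "choice" made according to the machine's own transition relation.
# All states and transitions are illustrative.

TRANSITIONS = {
    ("idle", "input"): ["working", "idle"],     # may start work or ignore it
    ("working", "input"): ["working", "done"],  # may keep going or finish
}

def step(state, symbol, rng=random):
    """One step of the automaton; pairs with no entry stay put."""
    options = TRANSITIONS.get((state, symbol), [state])
    return rng.choice(options)  # the nondeterministic "choice"

state = "idle"
for symbol in ["input", "input", "input"]:
    state = step(state, symbol)
print(state in {"idle", "working", "done"})  # → True
```

Whether this sort of rule-governed nondeterminism deserves the word “choice” is, of course, exactly what the thread is disputing.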
I thought you made an argument that physical determinism somehow means that there’s no free will because physics is causes an effect to happen.
No, that’s not my argument. My argument (well, one of them anyway) is that if I am reliably predictable, then it must be the case that I am deterministic, and therefore I cannot have free will.
I actually go even further than that. If I am not reliably predictable, then I might have free will, but my mere unpredictability is not enough to establish that I have free will. Weather systems are not reliably predictable, but they don’t have free will. It is not even the case that non-determinism is sufficient to establish free will. Photons are non-deterministic, but they don’t have free will.
That’s an appeal to the authority of your personal intuition.
Well, yeah, of course it is (though I would not call my intuitions an “authority”). This whole discussion starts from a subjective experience that I have (and that other people report having), namely, feeling like I have free will. I don’t know of any way to talk about a subjective experience without referring to my personal intuitions about it.
The difference between free will and other subjective experiences like, say, seeing color, is that seeing colors can be easily grounded in an objective external reality, whereas with free will it’s not so easy. In fact, no one has exhibited a satisfactory explanation of my subjective experience that is grounded in objective reality, hence my conclusion that my subjective experience of having free will is an illusion.
This whole discussion starts from a subjective experience that I have (and that other people report having), namely, feeling like I have free will.
To the extent that the subjective experience you call free will is independent of what other people mean by the term “free will”, arguments about it aren’t that interesting for the general discussion of whether what’s commonly called free will exists.
More importantly, concepts that start from “I have the feeling that X is true” usually produce models of reality that aren’t true in 100% of cases. They make some decent predictions and fail at others.
It’s usually possible to refine concepts to be better at predicting. Developing operationalized terms is part of science.
This started with you saying, “But the word ‘free’ has an established meaning in English.” That’s you pointing to a shared understanding of “free”, not you pointing to your private experience.
No, that’s not my argument. My argument (well, one of them anyway) is that if I am reliably predictable, then it must be the case that I am deterministic, and therefore I cannot have free will.
Humans are not reliably predictable, due to being NFAs. From memory, Heinz von Förster gives the example of a child answering the question “What’s 1+1?” with “Blue”. It takes education to train children to actually give predictable answers to the question “What’s 1+1?”.
Weather systems are not reliably predictable, but they don’t have free will.
I think the reason weather systems don’t have free will is not that they aren’t free to make choices (if you use certain models) but has to do with the “will” part. Having a will is about having desires. The weather doesn’t have desires in the same sense that humans do, and thus it has no free will.
I think that humans do have desires that influence the choices they make, even in the absence of their being conscious of the desire creating the choice.
The difference between free will and other subjective experiences like, say, seeing color, is that seeing colors can be easily grounded in an objective external reality
Grounding the concept of color in external reality isn’t trivial. There are many competing definitions. You can define it over what the human eye perceives, which has a lot to do with human genetics that differ from person to person. You can define it over wavelengths. You can define it over RGB values.
It doesn’t make sense to argue that color doesn’t exist because the human qualia of color don’t map directly onto the wavelength definition of color.
With color, the way you determine the difference between colors is also a fun topic. The W3C definition, for example, leads to strange consequences.
That’s you pointing to a shared understanding of free and not you pointing to your private experience.
You’re conflating two different things:
Attempting to communicate about a phenomenon which is rooted in a subjective experience.
Attempting to conduct that communication using words rather than, say, music or dance.
Talking about the established meaning of the word “free” has to do with #2, not #1. The fact that my personal opinion enters into the discussion has to do with #1, not #2.
I think that humans do have desire that influence the choices they make
Yes, of course I agree. But that’s not the question at issue. The question is not whether we have “desires” or “will” (we all agree that we do), the question is whether or not we have FREE will. I think it’s pretty clear that we do NOT have the freedom to choose our desires. At least I don’t seem to; maybe other people are different. So where does this alleged freedom enter the process?
Grounding the concept of color in external reality isn’t trival
I never said it was. In fact, the difficulty of grounding color perception in objective reality actually supports my position. One would expect the grounding of free-will perception in objective reality to be at least as difficult as grounding color perception, but I don’t see those who support the objective reality of free will undertaking such a project, at least not here.
I’m willing to be convinced that this free will thing is real, but as with any extraordinary claim the burden is on you to prove that it is, not on me to prove that it is not.
I’m willing to be convinced that this free will thing is real, but as with any extraordinary claim the burden is on you to prove that it is, not on me to prove that it is not.
Pretty much everyone perceives himself/herself freely making choices, so the claim that free will is real is consistent with most people’s direct experience. While this does not prove that free will is real, it does suggest that the claim that free will is real is not really any more extraordinary than the claim that it is not real. So, I do not think that the person claiming that free will is real has any greater burden of proof than the person who claims that it is not.
That’s not a valid argument for at least four reasons:
There are many perceptual illusions, so the hypothesis that free will is an illusion is not a priori an extraordinary claim. (In fact, the feeling that you are living in a classical Galilean universe is a perceptual illusion!)
There is evidence that free will is in fact a perceptual illusion.
It makes evolutionary sense that the genes that built our brains would want to limit the extent to which they could become self-aware. If you knew that your strings were being pulled you might sink into existential despair, which is not generally salubrious to reproductive fitness.
We now understand quite a bit about how the brain works and about how computers work, and all the evidence indicates that the brain is a computer. More precisely, there is nothing a brain can do that a properly programmed Turing machine could not do, and therefore no property that a brain have that cannot be given to a Turing machine. Some Turing machines definitely do not have free will (if you believe that a thermostat has free will, well, we’re just going to have to agree to disagree about that). So if free will is a real thing you should be able to exhibit some way to distinguish those Turing machines that have free will from those that do not. I have heard no one propose such a criterion that doesn’t lead to conclusions that grate irredeemably upon my intuitions about what free will is (or what it would have to be if it were a real thing).
In this respect, free will really is very much like God except that the subjective experience of free will is more common than the subjective experience of the Presence of the Holy Spirit.
BTW, it is actually possible that the subjective experience of free will is not universal among humans. It is possible that some people don’t have this subjective perception, just as some people don’t experience the Presence of the Holy Spirit. It is possible that this lack of the subjective perception of free will is what leads some people to submit to the will of Allah, or to become Calvinists.
so the hypothesis that free will is an illusion is not a priori an extraordinary claim
I basically agree with that too—it is you rather than me who brought up the notion of extraordinary claims. It seems to me that the notion of extraordinary claims in this case is a red herring—that free will is real is a claim, and that free will is not real is a claim; I am simply arguing that neither claim has a greater burden of proof than the other. In fact, I think that there is room for reasonable people to disagree with regard to the free will question.
In fact, the feeling that you are living in a classical Galilean universe is a perceptual illusion!
I don’t know what that means exactly, but it sounds intriguing! Do you have a link or a reference with additional information?
2 There is evidence that free will is in fact a perceptual illusion
None of those experiments provides strong evidence; for several of the experiments, the article you linked lists objections to interpreting them as evidence against free will (e.g., per the article, “Libet himself did not interpret his experiment as evidence of the inefficacy of conscious free will”). One thing in particular that I noticed is that many of the experiments dealt with more-or-less arbitrary decisions—e.g. when to flick one’s wrist, when to make brisk finger movements at arbitrary intervals, etc. Even if it could be shown that the brain somehow goes on autopilot when making trivial, arbitrary decisions that hold no significant consequences, it is not clear that this says anything about more significant decisions—e.g. what college to attend, how much one should spend on a house, etc.
3 It makes evolutionary sense that the genes that built our brains would want to limit the extent to which they could become self-aware. If you knew that your strings were being pulled you might sink into existential despair, which is not generally salubrious to reproductive fitness.
That is a reasonable statement and I have no argument with it. But, while it provides a possible explanation why we might perceive free will even if it does not exist, I don’t think that it provides significant evidence against free will.
4 We now understand quite a bit about how the brain works and about how computers work, and all the evidence indicates that the brain is a computer. More precisely, there is nothing a brain can do that a properly programmed Turing machine could not do
I agree with that.
and therefore no property that a brain has that cannot be given to a Turing machine. Some Turing machines definitely do not have free will… So if free will is a real thing you should be able to exhibit some way to distinguish those Turing machines that have free will from those that do not.
If that statement is valid, then it seems to me that the following statement is also valid:
“There is no property that a brain can have that cannot be given to a Turing machine. Some Turing machines definitely are not conscious. So if consciousness is a real thing you should be able to exhibit some way to distinguish those Turing machines that are conscious from those that are not.”
So, do you believe that consciousness is a real thing? And, can a Turing machine be conscious? If so, how are we to distinguish those Turing machines that are conscious from those that are not?
neither claim has a greater burden of proof than the other
That may be. Nonetheless, at the moment I believe that free will is an illusion, and I have some evidence that supports that belief. I see no evidence to support the contrary belief. So if you want to convince me that free will is real then you’ll have to show me some evidence.
If you don’t care what I believe then you are under no obligations :-)
None of those experiments provides strong evidence
The fact that you can reliably predict some actions that people perceive as volitional up to ten seconds in advance seems like pretty strong evidence to me. But I suppose reasonable people could disagree about this. In any case, I didn’t say there was strong evidence, I just said there was some evidence.
So, do you believe that consciousness is a real thing?
That depends a little on what you mean by “a real thing.” Free will and consciousness are both real subjective experiences, but neither one is objectively real. Their natures are very similar. I might even go so far as to say that they are the same phenomenon. I recommend reading this book if you really want to understand it.
And, can a Turing machine be conscious?
Yes, of course. You would have to be a dualist to believe otherwise.
If so, how are we to distinguish those Turing machines that are conscious from those that are not?
That’s very tricky. I don’t know. I’m pretty sure that our current methods of determining consciousness produce a lot of false negatives. But if a computer that could pass the Turing test told me it was conscious, and could describe for me what it’s like to be a conscious computer, I’d be inclined to believe it.
I don’t know what that means exactly, but it sounds intriguing! Do you have a link or a reference with additional information?
It’s not that deep. It just means that your perception of reality is different from actual reality in some pretty fundamental ways. The sun appears to revolve around the earth, but it doesn’t. The chair you’re sitting on seems like a solid object, but it isn’t. “Up” always feels like it’s the same direction, but it’s not. And you feel like you have free will, but you don’t. :-)
If you don’t care what I believe then you are under no obligations
As a matter of fact, I think the free will question is an interesting question, but not an instrumentally important question; I can’t really think of anything I would do differently if I were to change my mind on the matter. This is especially true if you are right—in that case we’d both do whatever we’re going to do and it wouldn’t matter at all!
Free will and consciousness are both real subjective experiences, but neither one is objectively real. Their natures are very similar. I might even go so far as to say that they are the same phenomenon.
Interesting. The reason I asked the question is that there are some thinkers who deny the reality of free will but accept the reality of consciousness (e.g. Alex Rosenberg); I was curious if you are in that camp. It sounds as though you are not.
I recommend reading this book if you really want to understand (consciousness).
Glad to see you are open to at least some of Daniel Dennett’s views! (He’s a compatibilist, I believe.)
It’s not that deep. It (the idea that the feeling that you are living in a classical Galilean universe is a perceptual illusion) just means that your perception of reality is different from actual reality in some pretty fundamental ways. The sun appears to revolve around the earth, but it doesn’t. The chair you’re sitting on seems like a solid object, but it isn’t. “Up” always feels like it’s the same direction, but it’s not.
Understood. My confusion came from the term “Galilean Universe” which I assumed was a reference to Galileo (who was actually on-board with the idea of the Earth orbiting the Sun—that is one of the things that got him into some trouble with the authorities!)
we’d both do whatever we’re going to do and it wouldn’t matter at all!
Exactly right. I live my life as if I’m a classical conscious being with free will even though I know that metaphysically I’m not. It’s kind of fun knowing the truth though. It gives me a lot of peace of mind.
I was curious if you are in that camp.
I’m not familiar with Rosenberg so I couldn’t say.
Glad to see you are open to at least some of Daniel Dennett’s views! (He’s a compatibilist, I believe.)
Yes, I think you’re right. (That video is actually well worth watching!)
Galilean Universe
Sorry, my bad. I meant it in the sense of Galilean relativity (a.k.a. Newtonian relativity, though Galileo actually thought of it first) where time rather than the speed of light is the same for all observers.
(That’s actually a pretty good analogy. The force of gravity is a reasonable approximation to the truth for nearly all everyday purposes even though conceptually it is completely wrong. Likewise with free will.)
The common understanding of free will does run into a lot of problems when it comes to issues such as habit change.
There are people debating whether or not hypnosis can get people to do something against their free will, which happens to be a pretty bad question. Questions such as “can people decide by free will not to have an allergic reaction?” are misleading.
They or one of their matrilinear ancestors converted to Judaism?
In case it wasn’t clear: I was not posing “on what basis …” as a challenge, I was pointing out that it isn’t much of a challenge and that for similar reasons lisper’s parallel question about free will is not much of a challenge either.
Yes, obviously. But it is also a waste of time trying to get everyone to agree on what is beautiful, so too it is a waste of time trying to get everyone to agree on what is free will. Like I said, it’s really quibbling over terminology, which is almost always a waste of time.
OK, that’s not entirely unreasonable, but on that definition no reliably predictable agent has free will because there is always another good explanation that does not appeal to the agent’s desires, namely, whatever model would be used by a reliable predictor.
Indeed.
OK, then your intuitive definition of “free will” is very different from mine. I would not say that a chess-playing computer has free will, at least not given current chess-playing technology. On my view of free will, a chess-playing computer with free will should be able to decide, for example, that it didn’t want to play chess any more.
I’d say that not being reliably predictable is a necessary but not sufficient condition.
I think ialdabaoth actually came pretty close to getting it right:
I think that’s wrong for two reasons. The first is that the model might explicitly include the agent’s desires. The second is that a model might predict much better than it explains. (Though exactly what constitutes good explanation is another thing people may reasonably disagree on.)
I think that’s better understood as a limit on its intelligence than on its freedom. It doesn’t have the mental apparatus to form thoughts about whether or not to play chess (except in so far as it can resign any given game, of course). It may be that we shouldn’t try to talk about whether an agent has free will unless it has some notion of its own decision-making process, in which case I’d say not that the chess program lacks free will, but that it’s the wrong kind of thing to have or lack free will. (If you have no will, it makes no sense to ask whether it is free.)
Your objection to compatibilism was, unless I badly misunderstood, that no one has given a good compatibilist criterion for when something has free will. My objection was that you haven’t given a good incompatibilist criterion either. The fact that you can state a necessary condition doesn’t help with that; the compatibilist can state necessary conditions too.
There seem to me to be a number of quite different ways to interpret what he wrote. I am guessing that you mean something like: “I define free will to be unpredictability, with the further condition that we apply it only to agents we wish to anthropomorphize”. I suppose that gets around my random number generator example, but not really in a very satisfactory way.
So, anyway, suppose someone offers me a bribe. You know me well, and in particular you know that (1) I don’t want to do the thing they’re hoping to bribe me to, (2) I care a lot about my integrity, (3) I care a lot about my perceived integrity, and (4) the bribe is not large relative to how much money I have. You conclude, with great confidence, that I will refuse the bribe. Do you really want to say that this indicates that I didn’t freely refuse the bribe?
On another occasion I’m offered another bribe. But this time some evildoer with very strange preferences gets hold of me and compels me, at gunpoint, to decide whether to take it by flipping a coin. My decision is now maximally unpredictable. Is it maximally free?
I think the answers to the questions in those paragraphs should both be “no”, and accordingly I think unpredictability and freedom can’t be so close to being the same thing.
OK, let me try a different counter-argument then: do you believe we have free will to choose our desires? I don’t. For example, I desire chocolate. This is not something I chose, it’s something that happened to me. I have no idea how I could go about deciding not to desire chocolate. (I suppose I could put myself through some sort of aversion therapy, but that’s not the same thing. That’s deciding to try to train myself not to desire chocolate.)
If we don’t have the freedom to choose our desires, then on what basis is it reasonable to call decisions that take those non-freely-chosen desires into account “free will”?
This is a very deep topic that is treated extensively in David Deutsch’s book, “The Beginning of Infinity” (also “The Fabric of Reality”, particularly chapter 7). If you want to go down that rabbit hole you need to read at least Chapter 7 of TFOR first, otherwise I’ll have to recapitulate Deutsch’s argument. The bottom line is that there is good reason to believe that theories with high predictive power but low explanatory power are not possible.
Sure. Do you distinguish between “will” and “desire”?
Really? What are they?
Yes.
Yes, which is to say, not free at all. It is exactly as free as the first case.
The only difference between the two cases is in your awareness of the mechanism behind the decision-making process. In the first case, the mechanism that caused you to choose to refuse the bribe is inside your brain and not accessible to your conscious self. In the second case, (at least part of) the mechanism that causes you to make the choice is more easily accessible to your conscious self. But this is a thin reed because the inaccessibility of your internal decision making process is (almost certainly) a technological limitation, not a fundamental difference between the two cases.
(I see you’ve been downvoted. Not by me.)
If Jewishness is inherited from one’s mother, and a person’s great^200000-grandmother [EDITED to fix an off-by-1000x error, oops] was more like a chimpanzee than a modern human and had neither ethnicity nor religion as we now understand them, then on what basis is it reasonable to call that person Jewish?
If sentences are made up of letters and letters have no meaning, then on what basis is it reasonable to say that sentences have meaning?
It is not always best to make every definition recurse as far back as it possibly can.
I have read both books. I do not think chapter 7 of TFoR shows that theories with high predictive power but low explanatory power are impossible, but it is some time since I read the book and I have just now only glanced at it rather than rereading it in depth. If you reckon Deutsch says that predictive power guarantees explanatory power, could you remind me where in the chapter he does it? Or, if you have an argument that starts from what Deutsch does in that chapter and concludes that predictive power guarantees explanatory power, could you sketch it? (I do not guarantee to agree with everything Deutsch says.)
I seldom use the word “will” other than in special contexts like “free will”. Why do you ask?
One such might be: “For an action to be freely willed, the causes leading up to it must go via a process of conscious decision by the agent.”
Meh, OK. So let me remind you that the question we were (I thought) discussing at this point was: are there clearer-cut satisfactory criteria for “free will” available to incompatibilists than to compatibilists? Now, of course if you say that by definition nothing counts as an instance of free will then that’s a nice clear-cut criterion, but it also has (so far as it goes) nothing at all to do with freedom or will or anything else.
I think you’re saying something a bit less content-free than that; let me paraphrase and you can correct me if I’m getting it wrong. “Free will means unpredictability-in-principle. Everything is in fact predictable in principle, and therefore nothing is actually an instance of free will.” That’s less content-free because we can then ask: OK, what if you’re wrong about everything being predictable in principle; or what if you’re right but we ask about a hypothetical different world where some things aren’t predictable in principle?
Let’s ask that. Imagine a world in which some sort of objective-collapse quantum mechanics is correct, and many things ultimately happen entirely at random. And let’s suppose that whether or not the brain uses quantum effects in any “interesting” way, it is at least affected by them in a chaos-theory sort of way: that is, sometimes microscale randomness arising from quantum mechanics ends up having macroscale effects on what your brain does. And now let’s situate my two hypothetical examples in this hypothetical world. In this world, of course, nothing is entirely predictable, but some things are much more predictable than others. In particular, the first version of me (deciding whether to take the bribe on the basis of my moral principles and preferences and so forth, which ends up being very predictable because the bribe is small and my principles and preferences strong) is much more predictable (both in principle and in practice) in this world than the second version (deciding, at gunpoint, on the basis of what I will now make a quantum random number generator rather than a coin flip). In this world, would you accordingly say that first-me is choosing much less freely than second-me?
I don’t think that’s correct. For instance, in the second case I am coerced by another agent, and in the first I’m not; in the first case my decision is a consequence of my preferences regarding the action in question, and in the second it isn’t (though it is a consequence of my preference for living over dying; but I remark that your predictability criterion gives the exact same result if in the second case the random number generator is wired directly into my brain so as to control my actions with no conscious involvement on my part at all).
You may prefer notions of free will with a sort of transitive property, where if X is free and X is caused by Y1,...,Yn (and nothing else) then one of the Y must be free. (Or some more sophisticated variant taking into account the fact that freedom comes in degrees, that the notion of “cause” is kinda problematic, etc.) I see no reason why we have to define free will in such a way. We are happy to say that a brain is intelligent even though it is made of neurons which are not intelligent, that a statue resembles Albert Einstein even though it is made of atoms that do not resemble Einstein, that a woolly jumper is warm even though it is made of individual fibres that aren’t, etc.
Of course. Does this mean that you concede that our desires are not freely chosen?
Oh, good!
You’re right, the argument in chapter 7 is not complete, it’s just the 80/20 part of Deutsch’s argument, so it’s what I point people to first. And non-explanatory models with predictive power are not impossible, they’re just extremely unlikely (probability indistinguishable from zero). The reason they are extremely unlikely is that in a finite universe like ours there can exist only a finite amount of data, but there are an infinite number of theories consistent with that data, nearly all of which have low predictive power. Explanatory power turns out to be the only known effective filter for theories with high predictive power. Hence, it is overwhelmingly likely that a theory with high predictive power will have high explanatory power.
No.
First, I disagree with “Free will means unpredictability-in-principle.” It doesn’t mean UIP, it simply requires UIP. Necessary, not sufficient.
Second, to be “real” free will, there would have to be some circumstances where you accept the bribe and surprise me. In this respect, you’ve chosen a bad example to make your point, so let me propose a better one: we’re in a restaurant and I know you love burgers and pasta, both of which are on the menu. I know you’ll choose one or the other, but I have no idea which. In that case, it’s possible that you are making the choice using “real” free will.
Not so. In the first case you are being coerced by your sense of morality, or your fear of going to prison, or something like that. That’s exactly what makes your choice not to take the bribe predictable. The only difference is that the mechanism by which you are being coerced in the second case is a little more overt.
No, what I require is a notion of free will that is the same for all observers, including a hypothetical one that can predict anything that can be predicted in principle. (I also want to give this hypothetical observer an oracle for the halting problem because I don’t think that Turing machines exercise “free will” or “decide” whether or not to halt.) This is simply the same criterion I apply to any phenomenon that someone claims is objectively real.
I think some of our desires are more freely chosen than others. I do not think an action chosen on account of a not-freely-chosen desire is necessarily best considered unfree for that reason.
That isn’t quite what you said before, but I’m happy for you to amend what you wrote.
It seems to me that the argument you’re now making has almost nothing to do with the argument in chapter 7 of Deutsch’s book. That doesn’t (of course) in any way make it a bad argument, but I’m now wondering why you said what you did about Deutsch’s books.
Anyway. I think almost all the work in your argument (at least so far as it’s relevant to what we’re discussing here) is done by the following statement: “Explanatory power turns out to be the only known effective filter for theories with high predictive power.” I think this is incorrect; simplicity plus past predictive success is a pretty decent filter too. (Theories with these properties have not infrequently turned out to be embeddable in theories with good explanatory power, of course, as when Mendeleev’s empirically observed periodicity was explained in terms of electron shells, and the latter further explained in terms of quantum mechanics.)
OK, but in that case either you owe us something nearer to necessary and sufficient conditions, or else you need to retract your claim that incompatibilism does better than compatibilism in the “is there a nice clear criterion?” test. Also, if you aren’t claiming anything close to “free will = UIP” then I no longer know what you meant by saying that ialdabaoth got it more or less right.
Sure. That would be why I said “with great confidence” rather than “with absolute certainty”. I might, indeed, take the bribe after all, despite all those very strong reasons to expect me not to. But it’s extremely unlikely. (So no, I don’t agree that I’ve “chosen a bad example”; rather, I think you misunderstood the example I gave.)
If you say “you chose a bad example to make your point, so let me propose a better one” and then give an example that doesn’t even vaguely gesture in the direction of making my point, I’m afraid I start to doubt that you are arguing in good faith.
The things you describe me as being “coerced by” are (1) not agents and (2) not external to me. These are not irrelevant details, they are central to the intuitive meaning of “free will” that we’re looking for philosophically respectable approximations to. (Perhaps you disagree with my framing of the issue. I take it that that’s generally the right way to think about questions like “what is free will?”.)
In particular, I think your claim about “the only difference” is flatly wrong.
That sounds sensible on first reading, but I think actually it’s a bit like saying “what I require is a notion of right and wrong that is the same for all observers, including a hypothetical one that doesn’t care about suffering” and inferring that our notions of right and wrong shouldn’t have anything to do with suffering. Our words and concepts need to be useful to us, and if some such concept would be uninteresting to a hypothetical superbeing that can predict anything that’s predictable in principle, that is not sufficient reason for us not to use it. Still more when your hypothetical superbeing needs capabilities that are probably not even in principle possible within our universe.
(I think, in fact, that even such a superbeing might have reason to talk about something like “free will”, if it’s talking about very-limited beings like us.)
I haven’t, as it happens, been claiming that free will is “objectively real”. All I claim is that it may be a useful notion. Perhaps it’s only as “objectively real” as, say, chess; that is, it applies to us, and what it is is fundamentally dependent on our cognitive and other peculiarities, and a world of your hypothetical superbeings might be no more interested in it than they presumably would be in chess, but you can still ask “to what extent is X exercising free will?” in the same way as you could ask “is X a better move than Y, for a human player with a human opponent?”.
Sorry about that. I really was trying to be helpful.
Well, heck, what are we arguing about then? Of course it’s a useful notion.
A better analogy would be “simultaneous events at different locations in space.” Chess is a mathematical abstraction that is the same for all observers. Simultaneity, like free will, depends on your point of view.
You’re arguing that no one has it and AIUI that nothing in the universe ever could have it. Doesn’t seem that useful to me.
I did consider substituting something like cricket or baseball for that reason. But I think the idea that free will is viewpoint-dependent depends heavily on what notion of free will you’re working with. I’m still not sure what yours actually is, but mine doesn’t have that property, or at any rate doesn’t have it to so great an extent as yours seems to.
Free will is a useful notion because we have the perception of having it, and so it’s useful to be able to talk about whatever it is that we perceive ourselves to have even though we don’t really have it. It’s useful in the same way that it’s useful to talk about, say, “the force of gravity” even though in reality there is no such thing. (That’s actually a pretty good analogy. The force of gravity is a reasonable approximation to the truth for nearly all everyday purposes even though conceptually it is completely wrong. Likewise with free will.)
You said that a chess-playing computer has (some) free will. I disagree (obviously because I don’t think anything has free will). Do you think Pachinko machines have free will? Do they “decide” which way to go when they hit a pin? Does the atmosphere have free will? Does it decide where tornadoes appear?
When I say “real free will” I mean this:
1. Decisions are made by my conscious self. This rules out pachinko machines, the atmosphere, and chess-playing computers having free will.
2. Before I make a decision, it must be actually possible for me to choose more than one alternative. Ergo, if I am reliably predictable, I cannot have free will, because then it is not possible for me to choose more than one alternative; I can only choose the alternative that a hypothetical predictor would reliably predict.
I don’t know how to make it any clearer than that.
I think it’s more helpful to talk about whatever we have that we’re trying to talk about, even if some of what we say about it isn’t quite right, which is why I prefer notions of free will that don’t become necessarily wrong if the universe is deterministic or there’s an omnipotent god or whatever.
I agree that gravity makes a useful analogy. Gravity behaves in a sufficiently force-like way (at least in regions of weakish spacetime curvature, like everywhere any human being could possibly survive) that I think for most purposes it is much better to say “there is, more or less, a force of gravity, but note that in some situations we’ll need to talk about it differently” than “there is no force of gravity”. And I would say the same about “free will”.
I don’t know much about Pachinko machines, but I don’t think they have any processes going on in them that at all resemble human deliberation, in which case I would not want to describe them as having free will even to the (very attenuated) extent that a chess program might have.
Again, I don’t think there are any sort of deliberative processes going on there, so no free will.
So there are two parts to this, and I’m not sure to what extent you actually intend them both. Part 1: decisions are made by conscious agents. Part 2: decisions are made, more specifically, by those agents’ conscious “parts” (of course this terminology doesn’t imply an actual physical division).
Of course “actually possible” is pretty problematic language; what counts as possible? If I’m understanding you right, you’d cash it out roughly as follows: look at the probability distribution of possible outcomes in advance of the decision; then freedom = entropy of that probability distribution (or something of the kind).
So then freedom depends on what probability distribution you take, and you take the One True Measure of freedom to be what you get for an observer who knows everything about the universe immediately before the decision is made (more precisely, everything in the past light-cone of the decision); if the universe is deterministic then that’s enough to determine the answer after the decision is made too, so no decisions are free.
One obvious problem with this is that our actual universe is not deterministic in the relevant sense. We can make a device based on radioactive decay or something for which knowledge of all that can be known in advance of its operation is not sufficient to tell you what it will output. For all we know, some or all of our decisions are actually affected enough by “amplified” quantum effects that they can’t be reliably predicted even by an observer with access to everything in their past light-cone.
It might be worse. Perhaps some of our decisions are so affected and some not. If so, there’s no reason (that I can see) to expect any connection between “degree of influence from quantum randomness” and any of the characteristics we generally think of as distinguishing free from not-so-free—practical predictability by non-omniscient observers, the perception of freeness that you mentioned before, external constraints, etc.
It doesn’t seem to me that predictability by a hypothetical “past-omniscient” observer has much connection with what in other contexts we call free will. Why make it part of the definition?
That’s like saying, “I prefer triangles with four sides.” You are, of course, free to prefer whatever you want and to use words however you want. But the word “free” has an established meaning in English which is fundamentally incompatible with determinism. Free means, “not under the control or in the power of another; able to act or be done as one wishes.” If my actions are determined by physics or by God, I am not free.
And you think chess-playing machines do?
BTW, if your standard for free will is “having processing that resembles human deliberation” then you’ve simply defined free will as “something that humans have” in which case the question of whether or not humans have free will becomes very uninteresting because the answer is tautologically “yes”.
I’d call them two “interpretations” rather than two “parts”. But I intended the latter: to qualify as free will on my view, decisions have to be made by the conscious part of a conscious agent. If I am conscious but I base my decision on a coin flip, that’s not free will.
Whatever is not impossible. In this case (and we’ve been through this) if I am reliably predictable then it is impossible for me to do anything other than what a hypothetical reliable predictor predicts. That is what “reliably predictable” means. That is why not being reliably predictable is a necessary but not sufficient condition for free will. It’s really not complicated.
Because that is what the “free” part of “free will” means. If I am faced with a choice between A and B and a reliable predictor predicts I am going to choose A, then I cannot choose B (again, this is what “reliable predictor” means). If I cannot choose B then I am not free.
I don’t think that’s at all clear, and the fact that a clear majority of philosophers are compatibilists indicates that a bunch of people who spend their lives thinking about this sort of thing also don’t think it’s impossible for “free” to mean something compatible with determinism.
Let’s take a look at that definition of yours, and see what it says if my decisions are determined by the laws of physics. “Not under the control or in the power of another”? That’s OK; the laws of physics, whatever they are, are not another agent. “Able to act or be done as one wishes”? That’s OK too; of course in this scenario what I wish is also determined by the laws of physics, but the definition doesn’t say anything about that.
(I wouldn’t want to claim that the definition you selected is a perfect one, of course.)
Yup. Much much simpler, of course. Much more limited, much more abstract. But yes, a tree-search with an evaluation at the leaves does indeed resemble human deliberation somewhat. (Do I need to keep repeating in each comment that all I claim is that arguably chess-playing programs have a very little bit of free will?)
Nope. But not having such processing seems like a good indication of not having free will: whatever free will is, it has to be something to do with making decisions, nothing a pachinko machine or the weather does seems at all decision-like, and the absence of any process that looks at all like deliberation seems to me to be a large part of why. (Though I would be happy to reconsider in the face of something that behaves in ways that seem sufficiently similar to, e.g., apparently-free humans despite having very different internals.)
I have pointed out more than once that in this universe there is never prediction that reliable, and anything less reliable makes the word “impossible” inappropriate. For whatever reason, you’ve never seen fit even to acknowledge my having done so.
But let’s set that aside. I shall restate your claim in a form I think better. “If you are reliably predictable, then it is impossible for your choice and the predictor’s prediction not to match.” Consider a different situation, where instead of being predicted your action is being remembered. If it’s reliably rememberable, then it is impossible for your action and the rememberer’s memory not to match—but I take it you wouldn’t dream of suggesting that that involves any constraint on your freedom.
So why should it be different in this case? One reason would be if the predictor, unlike the rememberer, were causing your decision. But that’s not so; the prediction and your decision are alike consequences of earlier states of the world. So I guess the reason is because the successful prediction indicates the fact that your decision is a consequence of earlier states of the world. But in that case none of what you’re saying is an argument for incompatibilism; it is just a restatement of incompatibilism.
Please consider the possibility that other people besides yourself have thought about this stuff, are reasonably intelligent, and may disagree with you for reasons other than being too stupid to see what is obvious to you.
No. It means you will not choose B, which is not necessarily the same as that you cannot choose B. And (I expect I have said this at least once already in this discussion) words like “cannot” and “impossible” have a wide variety of meanings and I see no compelling reason why the only one to use when contemplating “free will” is the particular one you have in mind.
How would you define it then?
This would not be the first time in history that the philosophical community was wrong about something.
No, I get that. But “a very little bit” is still distinguishable from zero, yes?
Nothing about it seems human decision-like. But that’s a prejudice because you happen to be human. See below...
I believe that intelligent aliens could exist (in fact, almost certainly do exist). I also believe that fully intelligent computers are possible, and might even be constructed in our lifetime. I believe that any philosophy worth adhering to ought to be IA-ready and AI-ready, that is, it should not fall apart in the face of intelligent aliens or artificial intelligence. (Aside: This is the reason I do not self-identify as a “humanist”.)
Also, it is far from clear that chess computers work anything at all like humans. The hypothesis that humans make decisions by heuristic search has been pretty much disproven by >50 years of failed AI research.
I hereby acknowledge your having pointed this out. But it’s irrelevant. All I require for my argument to hold is predictability in principle, not predictability in fact. That’s why I always speak of a hypothetical rather than an actual predictor. In fact, my hypothetical predictor even has an oracle for the halting problem (which is almost certainly not realizable in this universe) because I don’t believe that Turing machines exercise free will when “deciding” whether or not to halt.
That’s possible. But just because incompatibilism is a tautology does not make it untrue.
I don’t think it is a tautology. The state of affairs for a reliable predictor to exist would be that there is something that causes both my action and the prediction, and that whatever this is is accessible to the predictor before it is accessible to me (otherwise it’s not a prediction). That doesn’t feel like a tautology to me, but I’m not going to argue about it. Either way, it’s true.
Of course. As soon as someone presents a cogent argument I’m happy to consider it. I haven’t heard one yet (despite having read this).
That’s really the crux of the matter I suppose. It reminds me of the school of thought on the problem of theodicy which says that God could eliminate evil from the world, but he chooses not to for some reason that is beyond our comprehension (but is nonetheless wise and good and loving). This argument has always struck me as a cop-out. If God’s failure to use His super-powers for good is reliably predictable, then that to me is indistinguishable from God not having those super powers to begin with.
You can see the absurdity of it by observing that this same argument can be applied to anything, not just God. I can argue with equal validity that rocks can fly, they just choose not to. Or that I could, if I wanted to, mount an argument for my position that is so compelling that you would have no choice but to accept it, but I choose not to because I am benevolent and I don’t want to shatter your illusion of free will.
I don’t see any possible way to distinguish between “can not” and “with 100% certainty will not”. If they can’t be distinguished, they must be the same.
I already pointed out that your own choice of definition doesn’t have the property you claimed (being fundamentally incompatible with determinism). I think that suffices to make my point.
Very true. But if you are claiming that some philosophical proposition is (not merely true but) obvious and indeed true by definition, then firm disagreement by a majority of philosophers should give you pause.
You could still be right, of course. But I think you’d need to offer more and better justification than you have so far, to be at all convincing.
Well, the actual distinguishing might be tricky, especially as all I’ve claimed is that arguably it’s so. But: yes, I have suggested—to be precise about my meaning—that some reasonable definitions of “free will” may have the consequence that a chess-playing program has a teeny-tiny bit of free will, in something like the same way as John McCarthy famously suggested that a thermostat has (in a very aetiolated sense) beliefs.
Nothing about it seems decision-like at all. My notion of what is and what isn’t a decision is doubtless influenced by the fact that the most interesting decision-making agents I am familiar with are human, which is why an abstract resemblance to human decision-making is something I look for. I have only a limited and (I fear) unreliable idea of what other forms decision-making can take. As I said, I’ll happily revise this in the light of new data.
Me too; if you think that what I have said about decision-making isn’t, then either I have communicated poorly or you have understood poorly or both. More precisely: my opinions about decision-making surely aren’t altogether IA/AI-ready, for the rather boring reason that I don’t know enough about what intelligent aliens or artificial intelligences might be like for my opinions to be well-adjusted for them. But I do my best, such as it is.
First: No, it hasn’t. The hypothesis that humans make all their decisions by heuristic search certainly seems pretty unlikely at this point, but so what? Second and more important: I was not claiming that humans make decisions by tree searching. (Though, as it happens, when playing chess we often do—though our trees are quite different from the computers’.) I claim, rather, that humans make decisions by a process along the lines of: consider possible actions, envisage possible futures in each case, evaluate the likely outcomes, choose something that appears good. Which also happens to be an (extremely handwavy and abstract) description of what a chess-playing program does.
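The handwavy description above—consider possible actions, envisage the future in each case, evaluate, choose what looks best—can be sketched as a one-ply search. Everything here is invented for illustration (the function names, the toy "game"); the point is only that the same abstract shape fits both a chess program and the informal description of human deliberation.

```python
def decide(state, actions, result, evaluate):
    """Choose the action whose envisaged outcome evaluates best.

    `actions(state)` lists the possible actions, `result(state, a)` envisages
    the future after taking action `a`, and `evaluate(s)` scores a future.
    Nothing here is specific to chess or to humans; all four ingredients
    are supplied by the caller.
    """
    return max(actions(state), key=lambda a: evaluate(result(state, a)))

# Toy example: pick the increment that gets a counter closest to 10.
best = decide(
    state=3,
    actions=lambda s: [1, 2, 5],
    result=lambda s, a: s + a,
    evaluate=lambda s: -abs(10 - s),
)
print(best)  # 5, since 3 + 5 = 8 is closest to 10
```

A chess program just iterates this envisage-and-evaluate step over a deep tree; the abstract skeleton is the same.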
Perfectly reliable prediction is not possible in principle in our universe. Not even with a halting oracle.
I think the fact that you never actually get to observe the event of “such-and-such a TM not halting” means you don’t really need to worry about that. In any case, there seems to me something just a little improper about finagling your definition in this way to make it give the results you want: it’s as if you chose a definition in some principled way, found it gave an answer you didn’t like, and then looked for a hack to make it give a different answer.
Of course not. But what I said was neither (1) that incompatibilism is a tautology nor (2) that that makes it untrue. I said that (1) your argument was a tautology, which (2) makes it a bad argument.
I think that may say more about your state of mind than about the available arguments. In any case, the fact that you don’t find any counterarguments to your position cogent is not (to my mind) good reason for being rudely patronizing to others who are not convinced by what you say.
I regret to inform you that “argument X has been deployed in support of wrong conclusion Y” is not good reason to reject argument X—unless the inference from X to Y is watertight, which in this case I hope you agree it is not.
This troubles me not a bit, because you can never say “with 100% certainty will not” about anything with any empirical content. Not even if you happen to be a perfect reasoner and possess a halting oracle.
And at degrees of certainty less than 100%, it seems to me that “almost certainly will not” and “very nearly cannot” are quite different concepts and are not so very difficult to disentangle, at least in some cases. Write down ten common English boys’ names. Invite me to choose names for ten boys. Can I choose the names you wrote down? Of course. Will I? Almost certainly not. If the notion of possibility you’re working with leads you to a different conclusion, so much the worse for that notion of possibility.
Sorry, it is not my intention to be either rude or patronizing. But there are some aspects of this discussion that I find rather frustrating, and I’m sorry if that frustration occasionally manifests itself as rudeness.
Of course I can: with 100% certainty, no one will exhibit a working perpetual motion machine today. With 100% certainty, no one will exhibit superluminal communication today. With 100% certainty, the sun will not rise in the west tomorrow. With 100% certainty, I will not be the president of the United States tomorrow.
Do you believe that a thermostat makes decisions? Do you believe that a thermostat has (a little bit of) free will?
I presume you mean “perfectly reliable prediction of everything is not possible in principle.” Because perfectly reliable prediction of some things (in principle) is clearly possible. And perfectly reliable prediction of some things (in principle) with a halting oracle is possible by definition.
100%? Really? Not just “close to 100%, so let’s round it up” but actual complete certainty?
I too am a believer in the Second Law of Thermodynamics, but I don’t see on what grounds anyone can be 100% certain that the SLoT is universally correct. I say this mostly on general principles—we could just have got the physics wrong. More specifically, there are a few entropy-related holes in our current understanding of the world—e.g., so far as I know no one currently has a good answer to “why is the entropy so low at the big bang?” nor to “is information lost when things fall into black holes?”—so just how confidently would you bet that figuring out all the details of quantum gravity and of the big bang won’t reveal any loopholes?
Now, of course there’s a difference between “the SLoT has loopholes” and “someone will reveal a way to exploit those loopholes tomorrow”. The most likely possible-so-far-as-I-know worlds in which perpetual motion machines are possible are ones in which we discover the fact (if at all) after decades of painstaking theorizing and experiment, and in which actual construction of a perpetual motion machine depends on somehow getting hold of a black hole of manageable size and doing intricate things with it. But literally zero probability that some crazy genius has done it in his basement and is now ready to show it off? Nope. Very small indeed, but not literally zero.
Again, not zero. Very very very tiny, but not zero.
It does something a tiny bit like making decisions. (There is a certain class of states of affairs it systematically tries to bring about.) However, there’s nothing in what it does that looks at all like a deliberative process, so I wouldn’t say it has free will even to the tiny extent that maybe a chess-playing computer does.
For the avoidance of doubt: The level of decision-making, free will, intelligence, belief-having, etc., that these simple (or in the case of the chess program not so very simple) devices exhibit is so tiny that for most purposes it is much simpler and more helpful simply to say: No, these devices are not intelligent, do not have beliefs, etc. Much as for most purposes it is much simpler and more helpful to say: No, the coins in your pocket are not held away from the earth by the gravitational pull of your body. Or, for that matter: No, there is no chance that I will be president of the United States tomorrow.
Where are you heading with these questions? I mean, are you expecting them to help achieve mutual understanding, or are you playing to the gallery and trying to get me to say things that sound silly? (If so, I think you may have misjudged the gallery.)
Empirical things? Do you not, in fact, believe in quantum mechanics? Or do you think “in half the branches, by measure, X will happen, and in the other half Y will happen” counts as a perfectly reliable prediction of whether X or Y will happen?
Only perfectly non-empirical things. Sure, you can “predict” that a given Turing machine will halt. But you might as well say that (even without a halting oracle) you can “predict” that 3x4=12. As soon as that turns into “this actual multiplying device, right here, will get 12 when it tries to multiply 3 by 4”, you’re in the realm of empirical things, and all kinds of weird things happen with nonzero probability. You build your Turing machine but it malfunctions and enters an infinite loop. (And then terminates later when the sun enters its red giant phase and obliterates it. Well done, I guess, but then your prediction that that other Turing machine would never terminate isn’t looking so good.) You build your multiplication machine and a cosmic ray changes the answer from 12 to 14. You arrange pebbles in a 3x4 grid, but immediately before you count the resulting pebbles all the elementary particles in one of the pebbles just happen to turn up somewhere entirely different, as permitted (albeit with staggeringly small probability) by fundamental physics.
[EDITED to fix formatting screwage; silly me, using an asterisk to denote multiplication.]
I’m not sure what I “expect” but yes, I am trying to achieve mutual understanding. I think we have a fundamental disconnect in our intuitions of what “free will” means and I’m trying to get a handle on what it is. If you think that a thermostat has even a little bit of free will then we’ll just have to agree to disagree. If you think even a Nest thermostat, which does some fairly complicated processing before “deciding” whether or not to turn on the heat has even a little bit of free will then we’ll just have to agree to disagree. If you think that an industrial control computer, or an airplane autopilot, which do some very complicated processing before “deciding” what to do have even a little bit of free will then we’ll have to agree to disagree. Likewise for weather systems, pachinko machines, geiger counters, and computers searching for a counterexample to the Collatz conjecture. If you think any of these things has even a little bit of free will then we will simply have to agree to disagree.
Most people, by “100% certainty”, mean “certain enough that for all practical purposes it can be treated as 100%”. Not treating the statement as meaning that is just Internet literalness of the type that makes people say everyone on the Internet has Aspergers.
Not in the context of discussions of omniscience and whether pachinko machines have free will :-/
People who say this should go back to their TVs and Bud Lights and not try to overexert themselves with complicated things like that system of intertubes.
I am aware of that, thanks.
However, in this particular discussion the distinction between certainty and something-closely-resembling-certainty is actually important, for reasons I mentioned earlier.
The dictionary disagrees.
“Free” has many different meanings. What ontological category does “physics” have in your view of the world?

Are you seriously arguing that “free” in “free will” might mean the same thing as (say) “free” in “free beer”? Come on.
That’s a very good question, and it depends (ironically) on which of two possible definitions of physics you’re referring to. If you mean physics-the-scientific-enterprise (let’s call that physics1) then it exists in the ontological category of human activity (along with things like “commerce”). If you mean the underlying processes which are the object of study in physics1 (let’s call that physics2) then I’d put those in the ontological category of objective reality.
Note that ontological categories are not mutually exclusive. Existence is a vector space. Physics1 is also part of objective reality, because it is an emergent property of physics2.
You can see free will as “1 d: enjoying personal freedom: not subject to the control or domination of another”. There is no other person who controls your actions. The next definition is: “2 a: not determined by anything beyond its own nature or being: choosing or capable of choosing for itself”.

I think you can make a good case that the way someone’s neurons work is part of their own nature or being.

Your ontological model, in which there’s an entity called physics2 that causes neurons to do something that is not in their nature or being, is problematic.
I think this is a difference in the definition of the word “I”, which can reasonably be taken to mean at least three different things:
The totality of my brain and body and all of the processes that go on there. On this definition, “I have lungs” is a true statement.
My brain and all of the computational processes that go on there (but not the biological processes). On this definition, “I have lungs” is a false statement, but “I control my breathing” is a true statement.
That subset of the computational processes going on in my brain that we call “conscious.” On this view, the statement, “I control my breathing” is partially true. You can decide to stop breathing for a while, but there are hard limits on how long you can keep it up.
To me, the question of whether I have free will is only interesting on definition #3 because my conscious self is the part of me that cares about such things. If my conscious self is being coerced or conned, then I (#3) don’t really care whether the origin of that coercion is internal (part of my sub-conscious or my physiology) or external.
Basically, after you previously argued that there is only one reasonable definition of “free will”, you have now moved to the position that there are multiple reasonable definitions, and that you have particular reasons to prefer focusing on a specific one? Is that a reasonable description of your position?
No, not even remotely close. We seem to have a serious disconnect here.
For starters, I don’t think I ever gave a definition of “free will”. I have listed what I feel to be (two) necessary conditions for it, but I don’t think I ever gave sufficient conditions, which would be necessary for a definition. I’m not sure I even know what sufficient conditions would be. (But I think those necessary conditions, plus the known laws of physics, are enough to show that humans don’t have free will, so I think my position is sound even in the absence of a definition.) And I did opine at one point that there is only one reasonable interpretation of the word “free” in a context of a discussion of “free will.” But that is not at all the same thing as arguing that there is only one reasonable definition of “free will.” Also, the question of what “I” means is different from the question of what “free will” means. But both are (obviously) relevant to the question of whether or not I have free will.
The reason I brought up the definition of “I” is because you wrote:
That is not my position. (And ontology is a bit of a red herring here.) I can’t even imagine what it means for a neuron to “do something that is not in their nature or being”, let alone that this departure from “nature or being” could be caused by physics. That’s just bizarre. What did I say that made you think I believed this?
I can’t define “free will” just like I can’t define “pornography.” But I have an intuition about free will (just like I have one about porn) that tells me that, whatever it is, it is not something that is possessed by pachinko machines, individual photons, weather systems, or a Turing machine doing a straightforward search for a counter-example to the Collatz conjecture. I also believe that “will not with 100% reliability” is logically equivalent to “can not” in that there is no way to distinguish these two situations. If you wish to dispute this, you will have to explain to me how I can determine whether the reason that the moon doesn’t leave earth orbit is because it can’t or because it chooses not to.
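For concreteness, the Collatz search mentioned above is exactly the kind of wholly mechanical process at issue—nothing in it resembles deliberation. A sketch (the step bound and search range are arbitrary choices for illustration):

```python
def collatz_reaches_one(n, max_steps=10_000):
    """Return True if n's Collatz sequence reaches 1 within max_steps."""
    for _ in range(max_steps):
        if n == 1:
            return True
        # The Collatz rule: halve if even, else triple and add one.
        n = 3 * n + 1 if n % 2 else n // 2
    return False

# A "straightforward search" for a counterexample: check every n up to a bound.
assert all(collatz_reaches_one(n) for n in range(2, 10_000))
```

The machine grinds through candidates; at no point does anything happen that could be mistaken for a preference or a choice.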
Some people can, and it is not unhelpful to be able to do so.
I thought you made an argument that physical determinism somehow means that there’s no free will because physics causes an effect to happen. If I misunderstood the argument you made, feel free to point that out.
Given the dictionary definition of “free” that seems to be flawed.
That’s an appeal to the authority of your personal intuition. It prevents your statements from being falsifiable. It moves them into too-vague-to-be-wrong territory.
If I have a conversation with a person whose acrophobia I’m trying to debug, then I’m going to use words in a way where I only care about the effect of the words, not about whether my sentences make falsifiable statements. If, however, I want to have a rational discussion on LW, then I strive to use rational language: language that makes concrete claims that allow others to engage with me in rational discourse.

Again, that’s what distinguishes rational!LW from rational!NewAtheist. If you don’t simply want a replacement for religion, but care about reasoning, then it’s useful not to be too vague to be wrong.
The thing you wrote about only calling “I” the part of you that corresponds to your conscious mind looks to me like subclinical depersonalization disorder: a notion of the self that can be defended, but that’s unhealthy to have.
I not only have lungs. My lungs are part of the person that I happen to be.
If we stay with the dictionary definition of freedom, then let’s look at the nature of the moon. Is the fact that it revolves around the earth an emergent property of how the complex internals of the moon work, or isn’t it?
My math in that area isn’t perfect, but “objects that can be modeled by nontrivial nondeterministic finite automata” might be a criterion.

Nontrivial nondeterministic finite automata can reasonably be described as using heuristics to make choices. They make them based on the algorithm that’s programmed into them, and that algorithm can reasonably be described as being part of the nature of a specific nondeterministic finite automaton.

I don’t think the way the moon revolves around the earth is reasonably modeled with nontrivial nondeterministic finite automata.
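To pin down what "nontrivial NFA" might mean here, a minimal sketch: a machine is nontrivial in this sense if some state admits more than one legal successor for the same input, so its behaviour is not fully determined by its input. The states and transitions below are invented for illustration.

```python
# Transition relation of an NFA: (state, input) -> set of possible next states.
nfa = {
    ("idle", "ping"): {"reply", "ignore"},  # nontrivial: two legal successors
    ("reply", "ping"): {"reply"},
    ("ignore", "ping"): {"ignore"},
}

def nontrivial(delta):
    """True if some (state, input) pair admits more than one successor,
    i.e. the machine's next state is not fully determined by its input."""
    return any(len(successors) > 1 for successors in delta.values())

print(nontrivial(nfa))                    # True: "idle" + "ping" permits a choice
print(nontrivial({("a", "x"): {"a"}}))    # False: fully deterministic
```

On the proposed criterion, the moon's orbit would be the second, deterministic case, while an agent would be the first.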
No, that’s not my argument. My argument (well, one of them anyway) is that if I am reliably predictable, then it must be the case that I am deterministic, and therefore I cannot have free will.
I actually go even further than that. If I am not reliably predictable, then I might have free will, but my mere unpredictability is not enough to establish that I have free will. Weather systems are not reliably predictable, but they don’t have free will. It is not even the case that non-determinism is sufficient to establish free will. Photons are non-deterministic, but they don’t have free will.
Well, yeah, of course it is (though I would not call my intuitions an “authority”). This whole discussion starts from a subjective experience that I have (and that other people report having), namely, feeling like I have free will. I don’t know of any way to talk about a subjective experience without referring to my personal intuitions about it.
The difference between free will and other subjective experiences like, say, seeing color, is that seeing colors can be easily grounded in an objective external reality, whereas with free will it’s not so easy. In fact, no one has exhibited a satisfactory explanation of my subjective experience that is grounded in objective reality, hence my conclusion that my subjective experience of having free will is an illusion.
To the extent that the subjective experience you call free will is independent of what other people mean by the term “free will”, arguments about it aren’t that interesting for the general discussion of whether what’s commonly called free will exists.

More importantly, concepts that start from “I have the feeling that X is true” usually produce models of reality that aren’t true in 100% of cases. They make some decent predictions and fail at predictions in others.
It’s usually possible to refine concepts to be better at predicting. It’s part of science to develop operationalized terms.
This started with you saying, “But the word ‘free’ has an established meaning in English.” That’s you pointing to a shared understanding of “free”, not to your private experience.

Humans are not reliably predictable, due to being NFAs. From memory, Heinz von Förster gives the example of a child answering the question “What’s 1+1?” with “Blue”. It takes education to train children to actually give predictable answers to the question “What’s 1+1?”.
I think the issue with weather systems is not that they aren’t free to make choices (if you use certain models) but rather the “will” part. Having a will is about having desires. The weather doesn’t have desires in the same sense that humans do, and thus it has no free will.
I think that humans do have desires that influence the choices they make, even in the absence of their being conscious of the desire creating the choice.
Grounding the concept of color in external reality isn’t trivial. There are many competing definitions. You can define it over what the human eye perceives, which has a lot to do with human genetics that differ from person to person. You can define it over wavelengths. You can define it over RGB values.

It doesn’t make sense to argue that color doesn’t exist because the human qualia of color don’t map directly to the wavelength definition of color.

With color, the way you determine the difference between colors is also a fun topic. The W3C definition, for example, leads to strange consequences.
You’re conflating two different things:
Attempting to communicate about a phenomenon which is rooted in a subjective experience.
Attempting to conduct that communication using words rather than, say, music or dance.
Talking about the established meaning of the word “free” has to do with #2, not #1. The fact that my personal opinion enters into the discussion has to do with #1, not #2.
Yes, of course I agree. But that’s not the question at issue. The question is not whether we have “desires” or “will” (we all agree that we do), the question is whether or not we have FREE will. I think it’s pretty clear that we do NOT have the freedom to choose our desires. At least I don’t seem to; maybe other people are different. So where does this alleged freedom enter the process?
I never said it was. In fact, the difficulty of grounding color perception in objective reality actually supports my position. One would expect the grounding of free-will perception in objective reality to be at least as difficult as grounding color perception, but I don’t see those who support the objective reality of free will undertaking such a project, at least not here.
I’m willing to be convinced that this free will thing is real, but as with any extraordinary claim the burden is on you to prove that it is, not on me to prove that it is not.
Pretty much everyone perceives himself/herself freely making choices, so the claim that free will is real is consistent with most people’s direct experience. While this does not prove that free will is real, it does suggest that the claim that free will is real is not really any more extraordinary than the claim that it is not real. So, I do not think that the person claiming that free will is real has any greater burden of proof than the person who claims that it is not.
That’s not a valid argument for at least four reasons:
There are many perceptual illusions, so the hypothesis that free will is an illusion is not a priori an extraordinary claim. (In fact, the feeling that you are living in a classical Galilean universe is a perceptual illusion!)
There is evidence that free will is in fact a perceptual illusion.
It makes evolutionary sense that the genes that built our brains would want to limit the extent to which those brains could become self-aware. If you knew that your strings were being pulled you might sink into existential despair, which is not generally salubrious to reproductive fitness.
We now understand quite a bit about how the brain works and about how computers work, and all the evidence indicates that the brain is a computer. More precisely, there is nothing a brain can do that a properly programmed Turing machine could not do, and therefore no property that a brain has that cannot be given to a Turing machine. Some Turing machines definitely do not have free will (if you believe that a thermostat has free will, well, we’re just going to have to agree to disagree about that). So if free will is a real thing you should be able to exhibit some way to distinguish those Turing machines that have free will from those that do not. I have heard no one propose such a criterion that doesn’t lead to conclusions that grate irredeemably upon my intuitions about what free will is (or what it would have to be if it were a real thing).
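The thermostat mentioned above can be made concrete with a minimal sketch (the function name and threshold are illustrative assumptions, not part of anyone’s argument). It shows a program whose every “choice” is a fixed function of its input, i.e. the kind of trivially predictable Turing machine nobody wants to credit with free will:

```python
# A thermostat reduced to its decision rule: a pure function from a
# sensor reading to an action. The same input always yields the same
# output, so its behavior is reliably predictable from the rule alone,
# with no need to appeal to any "desires" it might have.
def thermostat(temperature_c, setpoint_c=20.0):
    """Return 'heat_on' or 'heat_off' based solely on the reading."""
    return "heat_on" if temperature_c < setpoint_c else "heat_off"

# A predictor holding this rule can forecast every action in advance.
for reading in (15.0, 25.0):
    print(reading, thermostat(reading))
```

Any proposed criterion for free will would have to explain what additional property a program could have that this one lacks.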
In this respect, free will really is very much like God except that the subjective experience of free will is more common than the subjective experience of the Presence of the Holy Spirit.
BTW, it is actually possible that the subjective experience of free will is not universal among humans. It is possible that some people don’t have this subjective perception, just as some people don’t experience the Presence of the Holy Spirit. It is possible that this lack of the subjective perception of free will is what leads some people to submit to the will of Allah, or to become Calvinists.
I agree with that
I basically agree with that too—it is you rather than me who brought up the notion of extraordinary claims. It seems to me that the notion of extraordinary claims in this case is a red herring—that free will is real is a claim, and that free will is not real is a claim; I am simply arguing that neither claim has a greater burden of proof than the other. In fact, I think that there is room for reasonable people to disagree with regard to the free will question.
I don’t know what that means exactly, but it sounds intriguing! Do you have a link or a reference with additional information?
None of those experiments provides strong evidence; for several of them the article you linked lists objections to interpreting the experiment as evidence against free will (e.g., per the article, “Libet himself did not interpret his experiment as evidence of the inefficacy of conscious free will”). One thing in particular that I noticed is that many of the experiments dealt with more-or-less arbitrary decisions—e.g. when to flick one’s wrist, when to make brisk finger movements at arbitrary intervals, etc. Even if it could be shown that the brain somehow goes on autopilot when making trivial, arbitrary decisions that hold no significant consequences, it is not clear that this says anything about more significant decisions—e.g. what college to attend, how much one should spend on a house, etc.
That is a reasonable statement and I have no argument with it. But, while it provides a possible explanation why we might perceive free will even if it does not exist, I don’t think that it provides significant evidence against free will.
I agree with that.
If that statement is valid, then it seems to me that the following statement is also valid:
“There is no property that a brain can have that cannot be given to a Turing machine. Some Turing machines definitely are not conscious. So if consciousness is a real thing you should be able to exhibit some way to distinguish those Turing machines that are conscious from those that are not.”
So, do you believe that consciousness is a real thing? And, can a Turing machine be conscious? If so, how are we to distinguish those Turing machines that are conscious from those that are not?
That may be. Nonetheless, at the moment I believe that free will is an illusion, and I have some evidence that supports that belief. I see no evidence to support the contrary belief. So if you want to convince me that free will is real then you’ll have to show me some evidence.
If you don’t care what I believe then you are under no obligations :-)
The fact that you can reliably predict some actions that people perceive as volitional up to ten seconds in advance seems like pretty strong evidence to me. But I suppose reasonable people could disagree about this. In any case, I didn’t say there was strong evidence, I just said there was some evidence.
That depends a little on what you mean by “a real thing.” Free will and consciousness are both real subjective experiences, but neither one is objectively real. Their natures are very similar. I might even go so far as to say that they are the same phenomenon. I recommend reading this book if you really want to understand it.
Yes, of course. You would have to be a dualist to believe otherwise.
That’s very tricky. I don’t know. I’m pretty sure that our current methods of determining consciousness produce a lot of false negatives. But if a computer that could pass the Turing test told me it was conscious, and could describe for me what it’s like to be a conscious computer, I’d be inclined to believe it.
It’s not that deep. It just means that your perception of reality is different from actual reality in some pretty fundamental ways. The sun appears to revolve around the earth, but it doesn’t. The chair you’re sitting on seems like a solid object, but it isn’t. “Up” always feels like it’s the same direction, but it’s not. And you feel like you have free will, but you don’t. :-)
As a matter of fact, I think the free will question is an interesting question, but not an instrumentally important question; I can’t really think of anything I would do differently if I were to change my mind on the matter. This is especially true if you are right—in that case we’d both do whatever we’re going to do and it wouldn’t matter at all!
Interesting. The reason I asked the question is that there are some thinkers who deny the reality of free will but accept the reality of consciousness (e.g. Alex Rosenberg); I was curious if you are in that camp. It sounds as though you are not.
Glad to see you are open to at least some of Daniel Dennett’s views! (He’s a compatibilist, I believe.)
Understood. My confusion came from the term “Galilean Universe” which I assumed was a reference to Galileo (who was actually on-board with the idea of the Earth orbiting the Sun—that is one of the things that got him into some trouble with the authorities!)
Exactly right. I live my life as if I’m a classical conscious being with free will even though I know that metaphysically I’m not. It’s kind of fun knowing the truth though. It gives me a lot of peace of mind.
I’m not familiar with Rosenberg so I couldn’t say.
Yes, I think you’re right. (That video is actually well worth watching!)
Sorry, my bad. I meant it in the sense of Galilean relativity (a.k.a. Newtonian relativity, though Galileo actually thought of it first) where time rather than the speed of light is the same for all observers.
The common understanding of free will does run into a lot of problems when it comes to issues such as habit change.
There are people debating whether or not hypnosis can get people to do something against their free will, which happens to be a pretty bad question. Questions such as
can people decide by free will not to have an allergic reaction?
are misleading.
Or you can convert to it.
I think you need at least a couple more zeroes in there for that to be right.
They or one of their matrilinear ancestors converted to Judaism?
In case it wasn’t clear: I was not posing “on what basis …” as a challenge, I was pointing out that it isn’t much of a challenge and that for similar reasons lisper’s parallel question about free will is not much of a challenge either.
Oooops! I meant there to be three more. Will fix. Thanks.