I already pointed out that your own choice of definition doesn’t have the property you claimed (being fundamentally incompatible with determinism). I think that suffices to make my point.
This would not be the first time in history that the philosophical community was wrong about something.
Very true. But if you are claiming that some philosophical proposition is (not merely true but) obvious and indeed true by definition, then firm disagreement by a majority of philosophers should give you pause.
You could still be right, of course. But I think you’d need to offer more and better justification than you have so far, to be at all convincing.
But “a very little bit” is still distinguishable from zero, yes?
Well, the actual distinguishing might be tricky, especially as all I’ve claimed is that arguably it’s so. But: yes, I have suggested—to be precise about my meaning—that some reasonable definitions of “free will” may have the consequence that a chess-playing program has a teeny-tiny bit of free will, in something like the same way as John McCarthy famously suggested that a thermostat has (in a very aetiolated sense) beliefs.
Nothing about it seems human decision-like.
Nothing about it seems decision-like at all. My notion of what is and what isn’t a decision is doubtless influenced by the fact that the most interesting decision-making agents I am familiar with are human, which is why an abstract resemblance to human decision-making is something I look for. I have only a limited and (I fear) unreliable idea of what other forms decision-making can take. As I said, I’ll happily revise this in the light of new data.
I believe that any philosophy worth adhering to ought to be IA-ready and AI-ready
Me too; if you think that what I have said about decision-making isn’t, then either I have communicated poorly or you have understood poorly or both. More precisely: my opinions about decision-making surely aren’t altogether IA/AI-ready, for the rather boring reason that I don’t know enough about what intelligent aliens or artificial intelligences might be like for my opinions to be well-adjusted for them. But I do my best, such as it is.
The hypothesis that humans make decisions by heuristic search has been pretty much disproven
First: No, it hasn’t. The hypothesis that humans make all their decisions by heuristic search certainly seems pretty unlikely at this point, but so what? Second and more important: I was not claiming that humans make decisions by tree searching. (Though, as it happens, when playing chess we often do—though our trees are quite different from the computers’.) I claim, rather, that humans make decisions by a process along the lines of: consider possible actions, envisage possible futures in each case, evaluate the likely outcomes, choose something that appears good. Which also happens to be an (extremely handwavy and abstract) description of what a chess-playing program does.
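For concreteness, here is that handwavy description rendered as a minimal Python sketch. It is not any real engine's code; legal_actions, envisage, and evaluate are hypothetical placeholders for whatever machinery the agent actually has:

```python
# A minimal sketch of the abstract procedure described above -- not any
# real chess engine. legal_actions, envisage, and evaluate are
# hypothetical placeholders supplied by whatever agent this is.

def decide(state, legal_actions, envisage, evaluate):
    """Consider actions, envisage futures, evaluate them, pick a good one."""
    best_action, best_score = None, float("-inf")
    for action in legal_actions(state):   # consider possible actions
        future = envisage(state, action)  # envisage a possible future
        score = evaluate(future)          # evaluate the likely outcome
        if score > best_score:            # keep whatever appears good
            best_action, best_score = action, score
    return best_action
```

A chess program fills in those placeholders with move generation, tree search, and a position evaluator; a human fills them in with something far messier. The abstract shape is the same.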
All I require for my argument to hold is predictability in principle, not predictability in fact.
Perfectly reliable prediction is not possible in principle in our universe. Not even with a halting oracle.
because I don’t believe that Turing machines exercise free will when “deciding” whether or not to halt.
I think the fact that you never actually get to observe the event of “such-and-such a TM not halting” means you don’t really need to worry about that. In any case, there seems to me something just a little improper about finagling your definition in this way to make it give the results you want: it’s as if you chose a definition in some principled way, found it gave an answer you didn’t like, and then looked for a hack to make it give a different answer.
just because incompatibilism is a tautology does not make it untrue.
Of course not. But what I said was neither (1) that incompatibilism is a tautology nor (2) that that makes it untrue. I said that (1) your argument was a tautology, which (2) makes it a bad argument.
As soon as someone presents a cogent argument I’m happy to consider it.
I think that may say more about your state of mind than about the available arguments. In any case, the fact that you don’t find any counterarguments to your position cogent is not (to my mind) good reason for being rudely patronizing to others who are not convinced by what you say.
It reminds me of [...]
I regret to inform you that “argument X has been deployed in support of wrong conclusion Y” is not good reason to reject argument X—unless the inference from X to Y is watertight, which in this case I hope you agree it is not.
I don’t see any possible way to distinguish between “can not” and “with 100% certainty will not”.
This troubles me not a bit, because you can never say “with 100% certainty will not” about anything with any empirical content. Not even if you happen to be a perfect reasoner and possess a halting oracle.
And at degrees of certainty less than 100%, it seems to me that “almost certainly will not” and “very nearly cannot” are quite different concepts and are not so very difficult to disentangle, at least in some cases. Write down ten common English boys’ names. Invite me to choose names for ten boys. Can I choose the names you wrote down? Of course. Will I? Almost certainly not. If the notion of possibility you’re working with leads you to a different conclusion, so much the worse for that notion of possibility.
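A back-of-envelope version of the point, under a purely hypothetical model of how I might choose (each name picked uniformly at random from a pool of 100 common names):

```python
# Back-of-envelope arithmetic for "almost certainly will not", under the
# (hypothetical) assumption that each of my ten names is picked uniformly
# at random from a pool of 100 common English boys' names.
pool_size = 100
names_needed = 10

p_exact_match = (1 / pool_size) ** names_needed
print(p_exact_match)  # 1e-20: nonzero ("can"), yet negligible ("almost certainly will not")
```

The probability is strictly positive, which is what "can" tracks; it is also astronomically small, which is what "almost certainly will not" tracks. The two come apart cleanly.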
Sorry, it is not my intention to be either rude or patronizing. But there are some aspects of this discussion that I find rather frustrating, and I’m sorry if that frustration occasionally manifests itself as rudeness.
you can never say “with 100% certainty will not” about anything with any empirical content
Of course I can: with 100% certainty, no one will exhibit a working perpetual motion machine today. With 100% certainty, no one will exhibit superluminal communication today. With 100% certainty, the sun will not rise in the west tomorrow. With 100% certainty, I will not be the president of the United States tomorrow.
Nothing about [a pachinko machine] seems decision-like at all.
a thermostat has (in a very aetiolated sense) beliefs.
Do you believe that a thermostat makes decisions? Do you believe that a thermostat has (a little bit of) free will?
Perfectly reliable prediction is not possible in principle in our universe. Not even with a halting oracle.
I presume you mean “perfectly reliable prediction of everything is not possible in principle.” Because perfectly reliable prediction of some things (in principle) is clearly possible. And perfectly reliable prediction of some things (in principle) with a halting oracle is possible by definition.
with 100% certainty, no one will exhibit a working perpetual motion machine today
100%? Really? Not just “close to 100%, so let’s round it up” but actual complete certainty?
I too am a believer in the Second Law of Thermodynamics, but I don’t see on what grounds anyone can be 100% certain that the SLoT is universally correct. I say this mostly on general principles—we could just have got the physics wrong. More specifically, there are a few entropy-related holes in our current understanding of the world—e.g., so far as I know no one currently has a good answer to “why is the entropy so low at the big bang?” nor to “is information lost when things fall into black holes?”—so just how confidently would you bet that figuring out all the details of quantum gravity and of the big bang won’t reveal any loopholes?
Now, of course there’s a difference between “the SLoT has loopholes” and “someone will reveal a way to exploit those loopholes tomorrow”. The most likely possible-so-far-as-I-know worlds in which perpetual motion machines are possible are ones in which we discover the fact (if at all) after decades of painstaking theorizing and experiment, and in which actual construction of a perpetual motion machine depends on somehow getting hold of a black hole of manageable size and doing intricate things with it. But literally zero probability that some crazy genius has done it in his basement and is now ready to show it off? Nope. Very small indeed, but not literally zero.
the sun will not rise in the west. [...] I will not be the president of the United States
Again, not zero. Very very very tiny, but not zero.
Do you believe that a thermostat makes decisions?
It does something a tiny bit like making decisions. (There is a certain class of states of affairs it systematically tries to bring about.) However, there’s nothing in what it does that looks at all like a deliberative process, so I wouldn’t say it has free will even to the tiny extent that maybe a chess-playing computer does.
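For concreteness, here is roughly the whole of a simple thermostat's behaviour, sketched in Python with hypothetical hardware hooks; there is goal-directedness of a sort, but no envisaged futures and no comparison of alternatives:

```python
# The whole of a simple thermostat's "deliberation", sketched as a
# bang-bang controller with hysteresis. read_temperature and set_heater
# are hypothetical hardware hooks, not any real device's API.

def thermostat_step(read_temperature, set_heater, setpoint=20.0, band=0.5):
    temp = read_temperature()
    if temp < setpoint - band:
        set_heater(True)    # too cold: act to bring about the desired state
    elif temp > setpoint + band:
        set_heater(False)   # too warm: likewise
    # in between, do nothing (hysteresis prevents rapid on/off cycling)
```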
For the avoidance of doubt: The level of decision-making, free will, intelligence, belief-having, etc., that these simple (or in the case of the chess program not so very simple) devices exhibit is so tiny that for most purposes it is much simpler and more helpful simply to say: No, these devices are not intelligent, do not have beliefs, etc. Much as for most purposes it is much simpler and more helpful to say: No, the coins in your pocket are not held away from the earth by the gravitational pull of your body. Or, for that matter: No, there is no chance that I will be president of the United States tomorrow.
Where are you heading with these questions? I mean, are you expecting them to help achieve mutual understanding, or are you playing to the gallery and trying to get me to say things that sound silly? (If so, I think you may have misjudged the gallery.)
perfectly reliable prediction of some things (in principle) is clearly possible.
Empirical things? Do you not, in fact, believe in quantum mechanics? Or do you think “in half the branches, by measure, X will happen, and in the other half Y will happen” counts as a perfectly reliable prediction of whether X or Y will happen?
is possible by definition.
Only perfectly non-empirical things. Sure, you can “predict” that a given Turing machine will halt. But you might as well say that (even without a halting oracle) you can “predict” that 3x4=12. As soon as that turns into “this actual multiplying device, right here, will get 12 when it tries to multiply 3 by 4”, you’re in the realm of empirical things, and all kinds of weird things happen with nonzero probability. You build your Turing machine but it malfunctions and enters an infinite loop. (And then terminates later when the sun enters its red giant phase and obliterates it. Well done, I guess, but then your prediction that that other Turing machine would never terminate isn’t looking so good.) You build your multiplication machine and a cosmic ray changes the answer from 12 to 14. You arrange pebbles in a 3x4 grid, but immediately before you count the resulting pebbles all the elementary particles in one of the pebbles just happen to turn up somewhere entirely different, as permitted (albeit with staggeringly small probability) by fundamental physics.
[EDITED to fix formatting screwage; silly me, using an asterisk to denote multiplication.]
Where are you heading with these questions? I mean, are you expecting them to help achieve mutual understanding,
I’m not sure what I “expect”, but yes, I am trying to achieve mutual understanding. I think we have a fundamental disconnect in our intuitions of what “free will” means, and I’m trying to get a handle on what it is. If you think that a thermostat has even a little bit of free will, then we’ll just have to agree to disagree. If you think even a Nest thermostat, which does some fairly complicated processing before “deciding” whether or not to turn on the heat, has even a little bit of free will, then we’ll just have to agree to disagree. If you think that an industrial control computer, or an airplane autopilot, which do some very complicated processing before “deciding” what to do, have even a little bit of free will, then we’ll have to agree to disagree. Likewise for weather systems, pachinko machines, Geiger counters, and computers searching for a counterexample to the Collatz conjecture. If you think any of these things has even a little bit of free will, then we will simply have to agree to disagree.
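For what it’s worth, the last machine on that list amounts to something like the following sketch; whether the loop ever halts is exactly the open question, which is what makes it an interesting test case for talk of machines “deciding” whether to halt:

```python
# A sketch of what "a computer searching for a counterexample to the
# Collatz conjecture" amounts to. Whether this loop ever halts is
# precisely the open question. Note that it can detect a nontrivial
# cycle, but it can never observe "this trajectory diverges forever".

def search_collatz_counterexample(start=2):
    n = start
    while True:
        m, seen = n, set()
        while m != 1:
            if m in seen:    # revisited a value: a nontrivial cycle, i.e.
                return n     # a counterexample to the conjecture
            seen.add(m)
            m = 3 * m + 1 if m % 2 else m // 2
        n += 1               # trajectory reached 1; try the next candidate
```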
100%? Really? Not just “close to 100%, so let’s round it up” but actual complete certainty?
Most people, by “100% certainty”, mean “certain enough that for all practical purposes it can be treated as 100%”. Not treating the statement as meaning that is just Internet literalness of the type that makes people say everyone on the Internet has Asperger’s.
Most people, by “100% certainty”, mean “certain enough that for all practical purposes it can be treated as 100%”.
Not in the context of discussions of omniscience and whether pachinko machines have free will :-/
that makes people say everyone on the Internet has Asperger’s.
People who say this should go back to their TVs and Bud Lights and not try to overexert themselves with complicated things like that system of intertubes.
I am aware of that, thanks. However, in this particular discussion the distinction between certainty and something-closely-resembling-certainty is actually important, for reasons I mentioned earlier.