Well, how did you mean your question? I mean, the answer is obviously, “of course you can act without guaranteed results; that’s every action anyone has ever taken.” Except that’s an utterly inane result which people in the free-will community (mostly) don’t care about, and this entire debate is happening inside the free-will community, so it needs to be understood in the context of compatibilism and incompatibilism.
See, there are numerous philosophers (and non-philosophers) whose model of free choice is “choice which could go either way, even under the exact same circumstances”—and they mean it strictly: you could load the save file from before the decision and watch the choice switch. If that’s the nature of a free decision, then you run into the problem of the soup here: apparently, you’re only free to order the soup if you have some measurable chance of not ordering it, even though you’d have to be crazy or stupid not to order it. Which is counterintuitive, because nobody’s holding a gun to your head; it looks like an exemplar of a free decision unless you’re committed to that sort of philosophy.
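To make the save-file picture concrete, here is a minimal sketch (hypothetical toy code; the order_at_restaurant function and its 90% soup threshold are invented for illustration, not anyone’s actual model of a decision). If the outcome depends only on the saved state, replaying from that state gives the same order every time; the can-go-either-way view requires that the replay could come out differently.

```python
import random

def order_at_restaurant(rng: random.Random) -> str:
    # A toy "decision" whose outcome depends only on the supplied random state.
    return "soup" if rng.random() < 0.9 else "salad"

# "Save the game" just before the decision: capture the exact state of the
# only varying part of this toy world, its random-number generator.
saved_state = random.Random(42).getstate()

# Reloading the same save replays the same decision.
rng = random.Random()
rng.setstate(saved_state)
first = order_at_restaurant(rng)

rng.setstate(saved_state)
second = order_at_restaurant(rng)

assert first == second  # deterministic replay: the choice cannot go either way

# By contrast, a "decision" drawing fresh entropy each run has some chance of
# coming out differently on a rerun, even from the same starting situation.
print(first, order_at_restaurant(random.Random()))
```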
Well, if it’s true that “causal determinism is presupposed in the very concept of human action,” then it should stay true when I’m talking about causal determinism and human action alone—and I should be able to ask for an explanation.
In other words, how does the fact that people do order soup show that if you load the save file, you’re guaranteed the same result? I know it’s used as a step in proofs about “free will,” but I’m asking about the step, not the proofs. Another proof rebutting some of the people who don’t like the first proof isn’t an answer.
Wait, are you asking the empirical question, “do human decision-making processes operate in a deterministic fashion”? As far as I can tell, the answer is approximately “yes” (at least at scales typical of ordering food at a restaurant without influence from nondeterministic random-number generators), the aforementioned can-go-either-way philosophers are committed to the opposite answer (or to believing that we’re just automata), and the people you really should be asking are the cognitive scientists and neuroscientists. Of which I am neither.
I’ll try another rephrasing: I have been told that “causal determinism is presupposed in the very concept of human action.” I look at the concept of human action and see no presupposition about causal determinism. So I ask, “Where in the concept is this presupposition? I can’t find it.”
...my, I am an idiot. No, it certainly doesn’t look presupposed—I imagine someone is misunderstanding (Edit: or equivocating on) the term “causal determinism.” Causality is presupposed, but determinism is not.
I was also frustrated by Hill’s vagueness on what seemed to be an important point (perhaps he elaborates later?). In any case, I can tell you what I think Hill was thinking when he wrote that, though I’m not exceptionally confident about it.
The concept of human action—of making plans and following through with them—seems to be based on the assumption that the world is fundamentally predictable. We make decisions as if the future can to some extent be determined by a knowledge of the present, paired with a set of well-defined rules.
The natural objection to this would be that human action only presupposes some ability to predict the future, not the perfect ability that might be possible if causal determinism is true. However, one could argue that it is far more natural to assume that the future is completely predictable, at least hypothetically, given that even our limited knowledge of the laws of nature grants us a good deal of predictive power. After all, there are many things we cannot yet do, but our present inability is poor evidence that they are logically impossible; the same goes for perfect prediction.
So in my mind, Hill wasn’t trying to make a definitive case for causal determinism, only observing that it is the far more natural conclusion to draw, based on the planning-oriented way human beings interact with the world.