You essentially posit a “decision algorithm” to which you ascribe the sensations most people attribute to free will. I don’t find this helpful; it seems like a cop-out to me. What if the way the brain makes decisions doesn’t translate well onto the philosophical apparatus of possibility and choice? You’re just trading “suggestively named LISP tokens” for suggestively named algorithms. And even if the brain does do something we could gloss in technical language as “making choices among possibilities,” there still aren’t really possibilities, and hence no choices.
What it all comes down to, as you acknowledge (somewhat), is redefining terms. But if you’re going to do that, why not just say, “none of this really matters, use language how you will”? Actually, a lot of your essays have these little disclaimers at the end, where you essentially say “at least that’s how I choose to use these words.” Why not lead with that?
There are basically three issues with any of these loaded terms (free will, choice, morality, consciousness, and so on) that need to be addressed: (1) the word as a token, and whether and how we want to define it; (2) matters the “common folk” want reassurance on, such as whether they should adopt a fatalistic outlook in the face of determinism, or whether their neighbors will go on killing sprees if morality isn’t made out of quarks; (3) the philosophical problem of free will, the problem of morality, and so on.
Philosophers have made a living trying to convince us that their abstract arguments have some relevance to the concerns of the common man, and that if we ignore them we are being insensitive or reductionist, guilty of scientism, and blind to the relevance of the humanities. That’s egregious nonsense. These really are three entirely separate issues. I get the impression that you actually think these problems are pseudo-problems, yet you tend to run issues 2 and 3 together in your discussions. Once you separate them out, though, I think the issues become trivial. It’s obvious determinism shouldn’t make us fatalistic, because we weren’t fatalistic before and nothing has changed; it’s obvious we won’t start behaving immorally if morals aren’t “in the world,” since we weren’t immoral before and nothing has changed; and so on.