Can someone give an extensional definition of free will? Or link to one.
Presuming free will references something in the first place, it would reference an infinite set, by its nature. I don’t think you can have one.
I am not asking for an exhaustive definition, just a fence around the concept.
Are you asking for a procedure for identifying acts of free will (the doable kind of extensional definition) or a set of in-out exemplars (ostensive definition)?
By extensional definition I mean fencing off the notion of free will with a set of reasonably sharp (close to the free will/not free will boundary) examples of not having free will.
A rock not having free will is uncontroversial, but not sharp (very far from the boundary). I am looking for a set of examples where most people would agree that:
1. It is an example of not having free will (uncontroversial).
2. It is hard to move it toward the “definitely free will” case without major disagreements from others (reasonably sharp).
Pretty sure I’m misparsing you somehow, but here are some things I might consider non-free actions:
A) an action is rewarded with a heroin fix; the actor is in withdrawal
B) an action will relieve extreme and urgent pain
C) an action is demanded by reflex (e.g. withdrawal from heat)
D) an action is demanded by an irresistibly salient emotional appeal that the agent does not reflectively endorse (release the country-slaying neurotoxin, or I shall shoot your child)
I think these are very good examples: I would agree with C), disagree with D), require clarification on B), and have no strong opinion on A). Others might have different opinions. I further think that amassing a wealth of examples like this, and selecting a subset where there is general agreement on which side of the fence they lie, is necessary for a productive discussion of the issue.
If you intend to try again in the current open thread, feel free to transfer the examples.
Trying to clarify my intuitions re. B:
Consider Paul Atreides undergoing the gom jabbar; he will die unless he keeps his hand in the box. Given that he knows this, I count his success as a freely willed action; if (counterfactually) the pain had been sufficient to overcome him, withdrawing his hand would not have been freely willed, because it is counter to his consciously endorsed values (and, in this case, not subtle or confused values).
However, if (also counterfactually) the threat of death had not been present or known to him, then withdrawing his hand may have been a freely willed act (if the pain built slowly enough to be noticed rather than just triggering a burn-reaction).
It is hard to move it toward the “definitely free will” case without major disagreements from others (reasonably sharp).
And how should I make sense of that? Are you assuming that not only is the boundary fuzzy, but people disagree about the direction of motion there?
A person controlled by Borg implants seems like a good example of 1, but I think you’d find widespread agreement about what changes would make that person more or less free (except among those who insist the boundary is sharp and binary).
The boundary is certainly Sorites-fuzzy, not much can be done about that, I suspect.
people disagree about the direction of motion there?
I did not mean that, no, but who knows.
A person controlled by Borg implants seems like a good example of 1, but I think you’d find widespread agreement about what changes would make that person more or less free (except among those who insist the boundary is sharp and binary).
I tend to agree, but I can imagine a counterargument: “but this person can still imagine other choices, and would follow them if not for the implants”. By the way, no need to go sci-fi: just replace Borg implants with voices in your head, or being physically restrained, etc.
As I said in my other replies, I don’t see how the issue of free will can be productively discussed without people agreeing on what they mean by it in non-central cases.
Examples of free will: pretty much all of people’s everyday activities.
Examples of non-free will: being asleep.
Borderline examples: the will exercised by a being in a state of endarkenment (e.g. due to the three poisons).
“Free will” is a pleonasm. There are degrees of it, but there is not really such a thing as an unfree will.
Eliezer set the problem of dissolving the question of free will as a beginning exercise in the practice of dissolving problems. The linked post includes a link to his solution, but he recommends solving the problem on one’s own before reading his answer.
His solution seems satisfactory to me. I do not know if this solution can already be found in academic philosophy, or what academic philosophers think about it, but in shorter form it is stated in this Zen story.
Eliezer uses a compatibilist definition, which works well for the central example (a mentally competent human in Western culture) but fails at the boundaries (unusual cultures, mental disorders, non-human animals, algorithms). Hence my original question.
He further elaborates:
There—now my sensation of freedom indicates something coherent; and most of the time, I will have no reason to doubt the sensation’s veracity. I have no problems about saying that I have “free will” appropriately defined; so long as I am out of jail, uncertain of my own future decision, and living in a lawful universe that gave me emotions and morals whose interaction determines my choices.
Yet, in the next paragraph he states:
Certainly I do not “lack free will” if that means I am in jail, or never uncertain of my future decisions, or in a brain-state where my emotions and morals fail to determine my actions in the usual way.
which seems to me to contradict the one before, as it expands the definition to include every possible human mind-state.
I do not recall him giving an example of a mind state which is clearly marked as “no free will”.
I don’t know what E’s sentence is doing there, to the point that I suspect it’s been garbled by an editing error. But I don’t see why “having free will” should not include pretty much all mind states, short of being asleep or abnormalities such as drug addiction. The phenomenon he is pointing to, whatever its name, is something that human minds do.
I’m very unclear on your question, and where you think the contradiction lies. Being addicted to a drug that you will reliably seek despite considering it wrong would reduce your “free will,” as it would take you closer to being “never uncertain of my future decisions, or in a brain-state where my emotions and morals fail to determine my actions in the usual way.”
(I would personally not have included the “uncertain” part before encountering Eliezer’s work, but of course other writers do treat it as important.)
Tried to clarify my question again.