A cognitive agent with intentions sounds like it’s at least in the same conceptual neighborhood as free will. Perhaps free will has roughly the same role in their models of moral action as intentions do in your model.
If a tornado kills someone, we don’t say that it acted immorally, but if a man does, we (typically) do. What’s the difference between the man and the tornado? While the tornado was just a force of nature, it seems like there’s some sense in which the man was an active agent, some way in which the man (unlike the tornado) had control of his actions, chose to kill, or willed the consequences of his actions.
One approach, which many philosophers have taken, is to give the label “free will” to that meaning of agency/control/choice/will/whatever which allows the man to have moral responsibility while the tornado does not, and then work to define what exactly it consists of. That might not be the best move to make, given the many existing definitions and connotations of the term “free will” and all of the attachments and confusions they create, but it’s not an inexplicable one.
If punishing tornados changed their behaviour, then we would try to punish tornados. An event appears to be intentional (chosen) when it’s controlled by contingencies of reward and punishment.
There are exceptions to this characterisation of will. When there is a power imbalance between those administering rewards and punishments and those being influenced by them, the decision is sometimes seen as less than free, and deemed exploitation. Parents and governments are generally given more leeway with regard to power imbalances.
When particular rewards have negative social consequences, they’re sometimes called addictive. When particular punishments have negative social consequences, their use is sometimes called coercive and/or unjust.