I’ll cheat and give you the ontological answer upfront: you’re confusing the alternate worlds simulated in your decision algorithm with physically real worlds. And the practical answer: free will is a tool for predicting whether a person is amenable to persuasion.
Smith has a brain tumor such that he couldn’t have done otherwise
Smith either didn’t simulate alternate worlds, didn’t evaluate them correctly, or the evaluation didn’t impact his decision-making; there is no process flow through outcome simulation that led to his action. Instead of “I want X dead → murder” it went “Tumor → murder”. Smith is unfree, even though both cases are equally physically determined.
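The “process flow through outcome simulation” can be made concrete with a toy sketch (purely illustrative; every name here is invented for the example). A “free” agent on this account is one whose action is selected by simulating counterfactual worlds and evaluating them; the tumor case bypasses this loop entirely.

```python
# Toy model of "counterfactuals live inside the decision algorithm".
# All functions and names are illustrative, not from any real library.

def simulate_outcome(world, action):
    """Predict the alternate world that would result from taking `action`."""
    return world | {"last_action": action}  # dict merge, Python 3.9+

def choose(world, actions, evaluate):
    """A 'free' agent: simulate each alternate world, score it, pick the best."""
    return max(actions, key=lambda a: evaluate(simulate_outcome(world, a)))

# An agent whose evaluation function carries moral weight is the kind of
# agent this account calls morally responsible.
def moral_value(outcome):
    return -100 if outcome["last_action"] == "murder" else 1

action = choose({"mood": "angry"}, ["murder", "walk away"], moral_value)
# The simulated murder-world is evaluated and rejected; it never becomes real.
```

The simulated worlds exist only as predictions inside the agent; nothing here conflicts with the whole computation being physically determined.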
Second, would a compatibilist think that a computer programmed with a chess-playing algorithm has free will or is responsible for its decisions?
Does the algorithm morally evaluate the outcomes of its moves? No. Hence it is not morally responsible. The algorithm does evaluate the outcomes of its moves for chess quality; hence it is responsible for its victory.
Is my dog in any sense “responsible” for peeing on the carpet?
Dogs can be trained to associate bad actions with guilt. There is a flow that leads from action prediction to moral judgment prediction; the dog is morally responsible. Animals that cannot do this are not.
Fourth, does it ever make sense to feel regret/remorse/guilt on a compatibilist view?
Sure. First off, note that our ontological restatement upfront completely removed the contradiction between free will and determinism, so the standard counterfactual arguments are back on the table. But I think the better approach is to treat these feelings as adaptations and social tools. “Does it make sense” = “is it coherent” + “is it useful”. It is coherent in the “counterfactuals exist in the prediction of the agent” model; it is useful in the “push game-theory players into cooperate/cooperate” sense.
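The “push game-theory players into cooperate/cooperate” point can be sketched as guilt acting as an internal tax on defection in a one-shot Prisoner’s Dilemma (an illustrative toy model with made-up payoff numbers, not anything from the thread):

```python
# Guilt as a social tool: an internal penalty on defection that can flip
# the rational choice from defect to cooperate. Payoffs are illustrative.

PAYOFF = {  # (my move, their move) -> my material payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_reply(their_move, guilt=0):
    """Pick my move given their expected move; guilt taxes defection."""
    def utility(my_move):
        return PAYOFF[(my_move, their_move)] - (guilt if my_move == "D" else 0)
    return max(["C", "D"], key=utility)

# Without guilt, defection dominates; with enough guilt, cooperation does.
assert best_reply("C", guilt=0) == "D"
assert best_reply("C", guilt=3) == "C"
```

An emotion that predictably shifts payoffs this way is “useful” in exactly the sense above: agents known to feel guilt are safer to cooperate with.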
So essentially it’s a question of “COULD the actor have considered societal rules and consequences before acting”.
This makes sense on brain tumor vs not cases.
But what about “they got drunk and then committed murder”? While drunk, they were unable to consider the consequences or to refrain from murdering.
Hypothetically, they don’t know that becoming drunk makes them homicidal.
Or take “ignorance of the law is not an excuse”.
A lot of these interpretations end up being “we know they probably didn’t have any form of ability to not commit this crime but we are going to punish anyway just in case they might”.
Maybe they read that particular law about importing shellfish buried in federal code and just lied and said they didn’t know.
Maybe they actually knew getting drunk makes them want to murder and they drank anyway.
It does still make sense for drunk murder, because it is well known that getting drunk impairs judgement, and the person chose to accept the consequences of operating with impaired judgement. They may not specifically have known that they would murder (almost no one does), but they are still held responsible for the consequences of choosing to knowingly impair their own judgement. By comparison, we (as a society) do not generally hold responsible those who become drunk or drugged involuntarily.
It’s the same principle of responsibility, just applied to an earlier action: the decision to impair their own judgement.
The same sort of thing applies to “ignorance of the law is no excuse”, though with thinner justification than when the principle was first established. It is the responsibility of everyone in a state to pay enough attention to the laws governing their actions to know which actions are certainly illegal and which may be dubious. If you are importing goods, then it is your responsibility to know the laws relating to those goods. If you do not or cannot, then it is your responsibility not to import goods.
Again, the same principle of moral responsibility applied to an earlier action: the decision to carry out goods importation while knowing that they are not fully aware of the laws governing it. The problem is that the volume of law has increased to such an extent that keeping up with it even in one sub-field has become a full-time specialized occupation.
Right. Similarly: regular people don’t usually murder when drunk, but YOU have neurological faults that make you drunkenly homicidal; see what I mean? It’s just like the law case. It’s one thing if the law is simple, clear, and well known; it’s another if you’re just helping out a friend by carrying a few live crayfish or an endangered species through customs.
The legal judgements are nonsense and unjust; the penalties are imposed for societal convenience.
I’m not actually sure what any of this discussion sub-branch has to do with free will and moral responsibility? It seems to have gone off on a tangent about legal minutiae rather than whether moral responsibility is compatible with determinism. But I guess topic drift happens.
It may happen to be true that in some specific case, with much better medical technology, it could be proven that a specific person had neurological differences that meant that they would have no hesitation in murdering when drunk even when they would never do so while sober, and that it wasn’t known that this could be possible.
In this case sure, moral responsibility for this specific act seems to be reduced, apart from the general responsibility due to getting drunk knowing that getting drunk does impair judgement (including moral judgement). But absolutely they should be held very strongly responsible if they ever willingly get drunk knowing this, even if they don’t commit murder on that occasion!
This holds regardless of whether the world is deterministic or not.