I’m not sure what exactly you mean by “can’t”. Imagine a program that searches for the maximum element of an array. From our perspective there’s only one value the program “can” return. But from the program’s perspective, before it’s scanned the whole array, it “can” return any value. Purely deterministic worlds can still contain agents that search for the best thing to do by using counterfactuals (“I could”, “I should”), if these agents don’t have complete knowledge of the world and of themselves. The concept of “free will” was pretty well-covered in the sequences.
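For concreteness, here is a minimal sketch of the max-search analogy (my own illustration, not anything from the sequences): the function below is fully deterministic, yet its internal "can" is defined by what remains consistent with the data it has seen so far, not by the physics.

```python
def max_search(xs):
    """Deterministic search for the maximum of xs.

    From outside, knowing all of xs, there is exactly one value this
    function 'can' return. From the program's own perspective mid-scan,
    all it knows is that the answer will be >= the best value seen so
    far, so 'could return anything >= best' is a genuine epistemic
    possibility, with no indeterminism anywhere in the process.
    """
    best = xs[0]
    for i in range(1, len(xs)):
        # The program's internal 'can': any return value >= best is
        # still consistent with everything it has observed so far.
        print(f"scanned {i}/{len(xs)} elements: could return any value >= {best}")
        if xs[i] > best:
            best = xs[i]
    return best

print("actually returns:", max_search([3, 7, 2, 9, 4]))
```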
You’re right, but you’re not disagreeing with me. My original statement assumed an incorrect model of free will. You are pointing out that a correct model of free will would yield different results. This is not a disputed point.
Imagine you have an AI that is capable of “thinking” but incapable of actually controlling its actions. Its attitude towards its actions is then immaterial, so its beliefs about the nature of morality are immaterial too. This is essentially the scenario that the common misconception of no-free-will determinism imagines.
My point was that using an incorrect model that decides “there is no free will” is a practical contradiction. Pointing out that a correct model contains free-will-like elements is not at odds with this claim.
Yes, I misunderstood your original point. It seems to be correct. Sorry.
Psychohistorian disagrees that cousin_it was disagreeing with him.
Very cute ;)