You’re right, but you’re not disagreeing with me. My original statement assumed an incorrect model of free will. You are pointing out that a correct model of free will would yield different results. This is not a disputed point.
Imagine you have an AI that is capable of “thinking,” but incapable of actually controlling its actions. Its attitude towards its actions is immaterial, so its beliefs about the nature of morality are immaterial. This is essentially compatible with the common misconception of no-free-will-determinism.
My point was that using an incorrect model that decides “there is no free will” is a practical contradiction. Pointing out that a correct model contains free-will-like elements is not at odds with this claim.
Yes, I misunderstood your original point. It seems to be correct. Sorry.
Psychohistorian disagrees that cousin_it was disagreeing with him.
Very cute ;)