If we perfectly understood the decision-making process and all its inputs, there’d be no black box left to label ‘free will.’ If instead we could perfectly predict the outcomes (but not the internals) of a person’s cognitive algorithms… so we know, but don’t know how we know… I’m not sure. That would seem to invite mysterious reasoning to explain how we know, for which ‘free will’ seems unfitting as a mysterious answer.
The answer in that scenario probably depends on how it would feel to perform the inerrant prediction of cognitive outcomes, and especially how it would feel to turn that inerrant predictor on oneself.
...hmm.