‘Free will’ is the halting point in the recursion of mental self-modeling.
Our minds model minds, and may model those minds’ models of minds, but cannot model an unlimited sequence of models of minds. At some point the sequence must end with a model that does not attempt to model itself: a model that just acts, without explanation. No matter how many resources we commit to ever-deeper models of models, we always end with a black box. So our intuition takes the black box to be a fundamental feature of our minds, rather than merely our failure to model them perfectly.
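Here is a minimal sketch of that halting point, in Python (my own illustration, not anything from the original comment): model minds recursively under a finite budget, and when the budget runs out, return an opaque black box in place of a deeper model.

```python
# Toy sketch (an assumption for illustration, not the commenter's code):
# recursive mind-modeling under a finite resource budget. Each level
# models the next mind's model of a mind; when the budget runs out we
# return an opaque "black box" instead of recursing further.

def model_mind(depth_budget: int) -> str:
    """Model a mind that itself models minds, down to a fixed budget."""
    if depth_budget == 0:
        # Resources exhausted: the deepest model "just acts without
        # explanation". This unexplained remainder is what the intuition
        # labels 'free will'.
        return "black box"
    return f"model of ({model_mind(depth_budget - 1)})"

print(model_mind(3))
# -> model of (model of (model of (black box)))
```

However deep the budget, the innermost term is always the same black box; only the number of wrappers around it changes.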
This explains why we rarely assume animals to share the same feature of free will, as we do not generally treat their minds as containing deep models of others’ minds. And, if we are particularly egocentric, we may not consider other human beings to share the same feature of free will, as we likewise assume their cognition to be fully comprehensible within our own.
...d-do I get the prize?
You have, in the local currency.
So, you are saying that free will is an illusion due to our limited predictive power?
...hmm.
If we perfectly understood the decision-making process and all its inputs, there’d be no black box left to label ‘free will.’ If instead we could perfectly predict the outcomes (but not the internals) of a person’s cognitive algorithms, so that we know what they will decide without knowing how we know… I’m not sure. That would seem to invite mysterious reasoning to explain how we know, and ‘free will’ seems unfitting as a mysterious answer to that question.
That scenario probably depends on how it feels to perform the inerrant prediction of cognitive outcomes, and especially how it feels to turn that inerrant predictor on the self.
You know, that fits. We often fail to ascribe free will to others, talking about how “that’s not like him” and committing the Fundamental Attribution Error (“he’s a murderer, so he’s evil!”).
This means we have to ascribe free will to any sufficiently intelligent agent that knows about our existence, right? Because they’ll be modeling us modeling them modeling us?