Someone who knows exactly what they will do can still suffer from akrasia, by wishing they would do something else. I’d say that if the model of yourself saying “I’ll do whatever I wish I would” beats every other model you try to build of yourself, that looks like free will. When it’s the other way around, what you observe is akrasia.
I don’t think that’s right. If you know exactly what you are going to do, that leaves no room for counterfactuals, at least not if you’re an LDT agent. Physically, there is no such thing as a counterfactual, especially not a logical one; so if your beliefs match the physical world perfectly, then the world looks deterministic to you, including your own behavior. I don’t think counterfactual reasoning makes sense without uncertainty.
As a human who has an intuitive understanding of counterfactuals, if I know exactly what a tic-tac-toe or chess program would do, I can still ask what would happen if it chose a particular action instead. The same goes when the agent of interest is myself.
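To make that concrete, here’s a rough sketch (the toy policy and names are mine, just for illustration): a fully deterministic tic-tac-toe program played against itself, once as-is and once with X’s first move forced elsewhere. Knowing exactly what it does doesn’t stop the second, counterfactual run from being well defined.

```python
# Counterfactual-by-intervention on a fully known, deterministic program.
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),
             (0,3,6),(1,4,7),(2,5,8),
             (0,4,8),(2,4,6)]

def winner(board):
    """Return "X" or "O" if a winning line is complete, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def policy(board):
    """A deterministic (deliberately naive) policy: take the first empty square."""
    return board.index(" ")

def play(forced_first_move=None):
    """Self-play the policy; optionally intervene on X's first move."""
    board = [" "] * 9
    for turn in range(9):
        move = policy(board)
        if turn == 0 and forced_first_move is not None:
            move = forced_first_move  # the counterfactual intervention
        board[move] = "XO"[turn % 2]
        if winner(board):
            return winner(board)
    return "draw"

print("actual:        ", play())                     # X wins under first-empty play
print("counterfactual:", play(forced_first_move=4))  # draw, under the intervention
```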
> if I know exactly what a tic-tac-toe or chess program would do,
I see what you mean, but if you were this logically omniscient, then supposing that the program did something else would imply that your system is inconsistent, which means everything is provable.
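That last step is just the principle of explosion (ex falso quodlibet); a one-line Lean sketch of it, purely my own illustration:

```lean
-- From believing both `p` and `¬p` (an inconsistent system),
-- any proposition `q` whatsoever becomes provable.
example (p q : Prop) (hp : p) (hnp : ¬p) : q := absurd hp hnp
```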
There needs to be boundedness somewhere, either in the number of deductions you can make, or in the certainty of your logical beliefs. This is what I mean by uncertainty being necessary for logical counterfactuals.
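For what it’s worth, here is a toy sketch of the deduction-budget version of that boundedness; `program` and `bounded_belief` are names I’m inventing for illustration. A reasoner that hasn’t expanded the program far enough holds no definite belief about its output, so supposing a different output contradicts nothing it has proved.

```python
# Boundedness as a deduction budget: past the budget, the reasoner
# holds no definite belief, leaving room for counterfactual supposition.
def program(n):
    """A deterministic function whose outputs could in principle all be deduced."""
    return (n * n) % 7

def bounded_belief(n, value, budget):
    """Evaluate the claim `program(n) == value` using at most `budget` deductions."""
    if n < budget:
        return program(n) == value  # within budget: a definite True/False belief
    return "unknown"                # budget exhausted: supposition stays consistent

print(bounded_belief(3, 2, budget=10))   # True: 9 % 7 == 2, deduced
print(bounded_belief(50, 0, budget=10))  # "unknown": either output can be supposed
```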