Akrasia is the name we give to the fact that the part of ourselves that communicates about X and the part that actually does X have slightly different goals. The communicating part is always whinging about how the other part is being lazy.
Perhaps, but that’s not quite how I see it. I’m saying akrasia is a failure to predict yourself, that is, a disconnect between your predictions and your actions.
I’m modeling humans as two agents that share a skull. One of those agents wants to do stuff and writes blog posts; the other likes lying in bed and has at least partial control of your actions. The part of you that does the talking can truthfully say that it wants to do X, but it isn’t in control.
Even if you can predict this whole thing, that still doesn’t stop it happening.
Maybe I ought to give a slightly more practical description.
Your akrasia is part of the world, and failing to navigate around it is an epistemic failure.
Right, so that’s not a decision-prediction fixed point; a correct LDT algorithm would, by definition, choose the optimal decision, so an accurate prediction of its behavior would just be that optimal decision.
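To make that fixed-point claim concrete, here is a minimal toy sketch (my own illustration, not something specified in this thread; the action names, payoffs, and decision functions are all assumptions). The idea: for an idealized decision algorithm that simply takes the best available action, the only self-prediction consistent with what it actually does is the optimal action, whereas akrasia corresponds to the actual action-selection diverging from that idealized algorithm.

```python
# Toy decision-prediction fixed point (illustrative; all names are assumptions).
ACTIONS = ["write_blog_post", "lie_in_bed"]

def utility(action: str) -> float:
    # Hypothetical payoffs; only the ordering matters here.
    return {"write_blog_post": 1.0, "lie_in_bed": 0.0}[action]

def idealized_decide(predicted_action: str) -> str:
    # An idealized ("correct") chooser: whatever it predicts about itself,
    # it still takes the best action, so the only accurate self-prediction
    # is the optimal action.
    return max(ACTIONS, key=utility)

def akratic_decide(predicted_action: str) -> str:
    # A crude model of akrasia: the acting part ignores the stated goal
    # and stays in bed, so an accurate self-prediction is not optimal.
    return "lie_in_bed"

def fixed_points(decide):
    # A prediction p is a fixed point when acting on it reproduces it.
    return [p for p in ACTIONS if decide(p) == p]

print(fixed_points(idealized_decide))  # ['write_blog_post']
print(fixed_points(akratic_decide))    # ['lie_in_bed']
```

On this toy picture, both agents have a decision-prediction fixed point, but only for the idealized chooser does the fixed point coincide with the optimal decision; the akratic one can predict itself perfectly and still not do the thing it says it wants.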
Donald Hobson appears to believe that determinism implies you do not have a choice.
Instead of (a) Beliefs → Reality, it’s (b) Reality → Beliefs. (b) can be broken or fixed, but fixing (a)...
How does a correct LDT algorithm turn two agents into one?