So I think my basic problem here is that I’m not familiar with this construct for decision making, or why it would be favored over the alternatives. Specifically, why encode logical rules about which actions to take? Why not take an MDP value-learning approach, where the agent chooses whichever action has the highest predicted utility? If an estimate turns out to be bad, it simply gets updated, and if the same situation arises again, the agent may choose a different action as a result of that update.
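To make what I'm imagining concrete, here's a rough sketch of the kind of value-learning loop I have in mind (tabular Q-learning; the `env` interface and names are just placeholders I made up, not from any particular library):

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch: pick the action with the highest
# estimated utility, then correct that estimate from the observed outcome.
# `env` is a hypothetical object with reset() and step(action) -> (state, reward, done).
def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    Q = defaultdict(float)  # Q[(state, action)] -> estimated utility

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Choose the highest-utility action (with a little exploration).
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])

            next_state, reward, done = env.step(action)

            # If the estimate was bad, this update nudges it toward the
            # observed outcome, so the same situation may produce a
            # different choice next time.
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```

Nothing in that loop requires hand-written rules about which action to take; the behavior falls out of the utility estimates themselves. So what does the rule-based construct buy you that this doesn't?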