Nitpick: Deep Blue does not backchain (nor does any widely used chess algorithm, to my knowledge).
Ugh. I was distracted by the issue of “is Deep Blue consequentialist” (which I’m still not sure about; maximizing the future value of a heuristic doesn’t seem clearly consequentialist or non-consequentialist to me), and forgot to check my assumption that all consequentialists backchain. Yes, you’re entirely right. If I’m not mistaken again, Deep Blue forwardchains, right? It doesn’t have a goal state that it works backward from; instead it has an initial state, recursively simulates sequences of moves to a certain depth, and chooses the initial move that leads to the best heuristic value at the bottom of the search. (Ways I could be wrong: this isn’t how Deep Blue works, “chaining” means something more specific, etc. But Google isn’t helping on either.)
Yes, that is a pretty good summary of how Deep Blue works.
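For concreteness, here is a minimal sketch of that forward-chaining, depth-limited search: plain minimax over a toy take-away game (take 1–3 stones; whoever takes the last stone wins) standing in for chess. This is the generic algorithm family, not Deep Blue’s actual implementation, which added alpha-beta pruning, quiescence search, and custom hardware; the game and the function names here are purely illustrative.

```python
def legal_moves(stones):
    # In the toy game a move removes 1, 2, or 3 stones.
    return [n for n in (1, 2, 3) if n <= stones]

def apply_move(stones, n):
    return stones - n

def heuristic(stones, maximizing):
    # Terminal position: whoever just moved took the last stone and won.
    if stones == 0:
        return -1 if maximizing else 1
    return 0  # non-terminal leaf at the depth limit: call it even

def minimax(stones, depth, maximizing):
    """Value of the position looking `depth` plies ahead. Forward-chaining:
    we only ever expand successors of the current state, never work
    backward from a goal state."""
    if stones == 0 or depth == 0:
        return heuristic(stones, maximizing)
    values = [minimax(apply_move(stones, m), depth - 1, not maximizing)
              for m in legal_moves(stones)]
    return max(values) if maximizing else min(values)

def best_move(stones, depth=6):
    """Choose the initial move that scores best at the search horizon,
    assuming the opponent plays to minimize the same heuristic."""
    return max(legal_moves(stones),
               key=lambda m: minimax(apply_move(stones, m), depth - 1, False))

print(best_move(10))  # -> 2: leaves 8 stones, a losing position for the opponent
```

The chaining direction is visible in `minimax`: the search never starts from a won position and reasons backward; it only grows the tree forward from wherever the game currently stands.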