Moreover, if you prune the decision tree of all branches bar one then all decision algorithms will give the same (correct) answer!
It’s totally OK to add in a notion of pruning, but you can’t really say that your decision algorithm of “CDT with pruning” makes sense unless you can specify which branches ought to be pruned and which should not. Also, outright pruning will often not work; you may only be able to rule out a branch as highly improbable rather than altogether impossible.
In other words, “pruning” as you put it is simply the same thing as “recognizing logical connections” in the sense that So8res used in the above post.
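To make the improbable-vs-impossible distinction concrete, here is a minimal sketch (all action names and payoffs invented for illustration) of an expected-value chooser. Deleting a branch outright and merely assigning it low probability can flip which action wins:

```python
# Hypothetical example: pruning a branch vs. down-weighting it as improbable.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs for one action."""
    return sum(p * v for p, v in outcomes)

def best_action(actions):
    """Pick the action whose outcome distribution has the highest expectation."""
    return max(actions, key=lambda name: expected_value(actions[name]))

# Action A has a disastrous branch we suspect (but aren't sure) is impossible.
actions_pruned = {
    "A": [(1.0, 10)],                  # disaster branch deleted outright
    "B": [(1.0, 9)],
}
actions_hedged = {
    "A": [(0.99, 10), (0.01, -1000)],  # disaster branch kept, but improbable
    "B": [(1.0, 9)],
}

print(best_action(actions_pruned))  # "A": with the branch pruned, A dominates
print(best_action(actions_hedged))  # "B": the residual 1% risk flips the choice
```

The flip happens because the hedged expectation for A is 0.99·10 + 0.01·(−1000) = −0.1, below B’s 9, whereas pruning silently asserts that the disaster branch has probability exactly zero.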
Well, a decision theory is presumably applied to some model of the physics, so that your agent can, for example, conclude that jumping out of a 100th-floor window would result in it hitting the ground at high velocity. Finding that a hypothetical outcome is physically impossible would fall within the purview of that model of physics.