I have not previously encountered the Predictor argument, but my immediate thought is that the fidelity with which the predictor models its own behavior is strongly limited by the threat of infinite recursion ("If I answer A then I predict B, so I'll answer B; then I predict C, so I'll answer C; then I predict D..." etc.). Even if it models its own predictions, either that submodel will be simplified enough not to model its own predictions, or some further submodel will be, or the prediction stack overflows.
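To make the regress concrete, here is a toy sketch (all names and structure hypothetical, not a model of any real system) of a predictor that forms its answer by consulting a simplified copy of itself:

```python
def predict(depth: int = 0, max_depth: int = 5) -> str:
    # Toy self-modeling predictor: to decide its answer, it consults
    # a simplified copy of itself one level down.
    if depth >= max_depth:
        # The innermost submodel is too simplified to model itself and
        # just outputs a fixed guess. Without this cutoff the recursion
        # never bottoms out and the stack overflows.
        return "A"
    inner = predict(depth + 1, max_depth)
    # "If I answer A then I predict B, so I'll answer B...": each level
    # revises its answer in response to the level below.
    return "B" if inner == "A" else "A"

print(predict(max_depth=5))  # -> "B"
print(predict(max_depth=6))  # -> "A"
```

The flip-flop with `max_depth` is the point: whatever answer comes out is an artifact of where the self-modeling stack gets cut off, not a stable self-prediction.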
Puts me in mind of a Philip K Dick story.
Many ways of building a predictor result in goal-directed predictors. However, it isn't clear that all predictors are goal-directed, even ones that are powerful, can see themselves in the world, and so on. The argument that they must be seems insufficiently compelling, at least to me.