Awesome post! I’ve added it to the Cyborgism sequence.

One comment:

> it’s entirely plausible that viewing GPTs as predictors or probabilistic constraint satisfaction problem solvers makes high-level properties more intuitive to you than viewing them as simulators

I disagree with the implied mutual exclusivity of viewing GPTs as predictors, probabilistic constraint satisfaction problem solvers, and simulators. A deep/holistic understanding of self-supervised simulators entails a model of probabilistic constraint solvers; a deep/holistic understanding of prediction (+ sampling) entails simulation; etc. Several of my sadly still unpublished posts in the Simulators sequence elaborate on the probabilistic boundary value problem solver analogy. Going through the “probabilistic laws of physics” analogy is a simple way to see how the probabilistic constraint solver framing is equivalent to the (semiotic physics) simulators frame.
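To make the “prediction (+ sampling) entails simulation” point concrete, here is a minimal toy sketch (illustrative only: the `TRANSITIONS` table and the `predict`/`simulate` functions are made-up stand-ins for a learned model, and a real GPT conditions on the whole history rather than just the last token). The same conditional distribution supports both readings: calling it once is prediction, while sampling from it autoregressively steps a stochastic trajectory forward, with the prompt as the initial condition and the predictor as the probabilistic laws of physics.

```python
import random

# Toy "predictor": a conditional distribution p(next_token | history).
# Here it is a hand-written bigram table; in a GPT it would be a learned
# network over the full history, but the interface is the same.
TRANSITIONS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.2, "ran": 0.8},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
    "down": {"<eos>": 1.0},
    "away": {"<eos>": 1.0},
}

def predict(history):
    """Predictor frame: return p(next_token | history)."""
    return TRANSITIONS.get(history[-1], {"<eos>": 1.0})

def simulate(prompt, max_steps=10):
    """Simulator frame: the same predictor, iterated with sampling.

    The prompt acts as the initial condition; predict() acts as the
    probabilistic "laws of physics" that propagate the state forward.
    """
    state = list(prompt)
    for _ in range(max_steps):
        dist = predict(state)
        tokens, probs = zip(*dist.items())
        nxt = random.choices(tokens, weights=probs)[0]
        if nxt == "<eos>":
            break
        state.append(nxt)
    return state

if __name__ == "__main__":
    # Same object, two frames: one call is a prediction,
    # an autoregressive rollout is a simulated trajectory.
    print(predict(["the"]))   # e.g. {'cat': 0.6, 'dog': 0.4}
    print(simulate(["the"]))  # e.g. ['the', 'cat', 'sat', 'down']
```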