What report is the image pulled from?
“Open Problems in GPT Simulator Theory” (forthcoming)
Specifically, this is a chapter on the preferred basis problem for GPT Simulator Theory.
TLDR: GPT Simulator Theory says that the language model μ : T^k → Δ(T) decomposes into a linear interpolation μ = ∑_{s ∈ S} α_s μ_s, where each μ_s : T^k → Δ(T) is a “simulacrum” and the amplitudes α_s update in an approximately Bayesian way. However, this decomposition is non-unique, making GPT Simulator Theory either ill-defined, arbitrary, or trivial. By comparing this problem to the preferred basis problem in quantum mechanics, I construct various potential solutions and compare them.
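For intuition about the “approximately Bayesian” amplitude update, here is a minimal sketch (not from the report; the simulacra, vocabulary, and function names are hypothetical) of what the update would look like if the decomposition were taken literally: each amplitude α_s gets re-weighted by the likelihood its simulacrum μ_s assigned to the observed token.

```python
import numpy as np

# Toy illustration: two simulacra over a 3-token vocabulary.
# Each simulacrum maps a context (a tuple of k tokens) to a distribution over tokens.
def mu_a(context):
    return np.array([0.8, 0.1, 0.1])  # simulacrum that favors token 0

def mu_b(context):
    return np.array([0.1, 0.1, 0.8])  # simulacrum that favors token 2

simulacra = [mu_a, mu_b]
alphas = np.array([0.5, 0.5])  # prior amplitudes alpha_s

def mixture_predict(alphas, simulacra, context):
    """mu(context) = sum_s alpha_s * mu_s(context): the mixture's next-token distribution."""
    return sum(a * mu_s(context) for a, mu_s in zip(alphas, simulacra))

def bayesian_update(alphas, simulacra, context, observed_token):
    """Re-weight each amplitude by the likelihood its simulacrum assigned
    to the observed token (Bayes' rule over mixture components), then renormalize."""
    likelihoods = np.array([mu_s(context)[observed_token] for mu_s in simulacra])
    posterior = alphas * likelihoods
    return posterior / posterior.sum()

context = (0, 1)  # stand-in for a k-token context
print(mixture_predict(alphas, simulacra, context))     # [0.45 0.1  0.45]
print(bayesian_update(alphas, simulacra, context, 2))  # [~0.11 ~0.89]
```

The non-uniqueness worry in the TLDR is visible even here: many different sets {μ_s} with suitable weights can reproduce the same mixture predictions, so nothing in the observable behavior alone picks out a preferred decomposition.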