Solomonoff induction is about computable models that produce conditional probabilities for an input symbol (which can represent anything at all) given a previous sequence of input symbols. The models are initially weighted by representational complexity, and for any given input sequence are further weighted by the probability they assign to the observed sequence.
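Concretely (standard notation, not from your post: $\nu$ ranges over the computable models and $K(\nu)$ is the length of the shortest program computing $\nu$), the mixture and its predictions look like

$$\xi(x_{1:n}) = \sum_{\nu} 2^{-K(\nu)}\,\nu(x_{1:n}), \qquad \xi(x_{n+1} \mid x_{1:n}) = \frac{\xi(x_{1:n+1})}{\xi(x_{1:n})}.$$

The $2^{-K(\nu)}$ factor is the complexity weighting, and $\nu(x_{1:n})$ is the further weighting by how much probability each model assigned to what was actually observed.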
The distinction between deterministic and non-deterministic Turing machines is not relevant, since the same functions are computable by both. The distinction I’m making is between models and input. They are not the same thing. This part of your post:
[...] world models which are one-dimensional sequences of states where every state has precisely one successor [...]
confuses the two. The input is a sequence of states. World-models can be any computable structure at all that provides predictions as output. Not even the predictions are sequences of states: they’re conditional probabilities for the next input given the previous input, and so can be viewed as a distribution over all finite sequences.
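To illustrate that last point (a minimal sketch; the toy `model` below and its two-symbol alphabet are my own invention, purely for illustration):

```python
# A "model" maps a previous sequence to conditional probabilities over the
# next symbol. Chaining those conditionals gives the probability the model
# assigns to any finite sequence.

def model(prefix):
    # Hypothetical model: predicts '1' with probability 0.9 after a '1',
    # otherwise predicts '0' and '1' with equal probability.
    if prefix and prefix[-1] == '1':
        return {'0': 0.1, '1': 0.9}
    return {'0': 0.5, '1': 0.5}

def sequence_probability(model, sequence):
    # Chain rule: P(x_1..x_n) = prod_i P(x_i | x_1..x_{i-1}).
    prob = 1.0
    for i, symbol in enumerate(sequence):
        prob *= model(sequence[:i])[symbol]
    return prob

print(sequence_probability(model, '0110'))  # 0.5 * 0.5 * 0.9 * 0.1 = 0.0225
```

Chaining the conditionals this way is exactly why a next-symbol predictor induces a probability for every finite sequence, even though it never outputs a sequence of states itself.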