I suspect that the paradigm of computation one chooses plays an important role here. The paradigm of a deterministic Turing machine leads to what I described in the post: one-dimensional sequences and guaranteed solipsism. The paradigm of a nondeterministic Turing machine allows for multi-dimensional sequences. I will edit the post to reflect this.
Solomonoff induction is about computable models that produce conditional probabilities for the next input symbol (which can represent anything at all) given the previous sequence of input symbols. The models are initially weighted by their representational complexity, and for any given input sequence are further weighted by the probability they assign to the observed sequence.
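To make the weighting concrete, here is a minimal sketch in Python, assuming a hand-picked toy family of models and made-up description lengths standing in for true Kolmogorov complexity: each model gets a prior weight of 2^-K and is then re-weighted by the likelihood it assigns to the observed sequence.

```python
import math

# Toy Solomonoff-style mixture (illustrative only): a small, hand-picked
# family of models stands in for "all computable models". Each model maps
# a history (sequence of past bits) to P(next bit = 1).

def always_zero(history):
    return 0.01          # predicts 0 almost surely

def always_one(history):
    return 0.99          # predicts 1 almost surely

def alternator(history):
    # Predicts the opposite of the last symbol; 0.5 on empty history.
    if not history:
        return 0.5
    return 0.99 if history[-1] == 0 else 0.01

def uniform(history):
    return 0.5           # maximum-entropy model

# Made-up description lengths (in bits) standing in for Kolmogorov
# complexity; the prior weight of a model is 2 ** (-length).
models = [
    ("always_zero", always_zero, 3),
    ("always_one",  always_one,  3),
    ("alternator",  alternator,  5),
    ("uniform",     uniform,     2),
]

def posterior_weights(sequence):
    """Prior 2^-K times the probability each model assigns to the data,
    normalized: the 'further weighting' by the observed sequence."""
    scores = []
    for name, model, length in models:
        log_score = -length * math.log(2)          # log of the 2^-K prior
        for t, symbol in enumerate(sequence):
            p1 = model(sequence[:t])               # P(next = 1 | history)
            log_score += math.log(p1 if symbol == 1 else 1 - p1)
        scores.append((name, log_score))
    z = max(s for _, s in scores)                  # for numerical stability
    total = sum(math.exp(s - z) for _, s in scores)
    return {name: math.exp(s - z) / total for name, s in scores}

print(posterior_weights([0, 1, 0, 1, 0, 1]))  # alternator gets most weight
```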
The distinction between deterministic and nondeterministic Turing machines is not relevant, since the same functions are computable by both. The distinction I'm making is between models and input; they are not the same thing. This part of your post:
[...] world models which are one-dimensional sequences of states where every state has precisely one successor [...]
confuses the two. The input is a sequence of states. World-models are any computable structure at all that provides predictions as output. Not even the predictions are sequences of states: they are conditional probabilities for the next input given previous inputs, and so can be viewed as a distribution over all finite sequences.
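A minimal sketch of that last point, with a hypothetical toy conditional model: multiplying the conditional next-symbol probabilities along a sequence (the chain rule) yields a probability for every finite sequence, so any such model induces a distribution over all of them.

```python
# A conditional model is any computable map (history, symbol) -> probability.
# The chain rule turns its next-symbol conditionals into a probability for
# each whole finite sequence.

def cond(history, symbol):
    """Toy conditional (hypothetical): slight bias toward repeating
    the most recent symbol; 0.5 on an empty history."""
    if not history:
        return 0.5
    p_repeat = 0.7
    return p_repeat if symbol == history[-1] else 1 - p_repeat

def sequence_probability(sequence):
    """Chain rule: P(x_1..x_n) = product over t of P(x_t | x_1..x_{t-1})."""
    p = 1.0
    for t, symbol in enumerate(sequence):
        p *= cond(sequence[:t], symbol)
    return p

print(sequence_probability([0, 0, 0]))  # 0.5 * 0.7 * 0.7 = 0.245
```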