That seems to be a pretty big claim. Can you articulate why you believe it to be true?
As far as I am aware, Solomonoff induction describes the singularly correct way to do statistical inference in the limit of infinite compute. (It computes generalized/full Bayesian inference.)
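To make the "weighted mixture over programs" idea concrete, here is a toy sketch (my own illustration, not from the comments above): true Solomonoff induction mixes over all programs, which is incomputable, so this sketch stands in "programs" with repeating bit patterns and takes a pattern's description length to be its length in bits. The universal prior 2^(-length) then favors shorter patterns, and prediction is the prior-weighted vote of all hypotheses consistent with the data. The hypothesis class and `max_period` cutoff are assumptions made purely for illustration.

```python
from fractions import Fraction
from itertools import product

def hypotheses(max_period):
    # Each hypothesis is a repeating bit pattern of length p, standing in
    # for a "program"; its description length is taken to be p bits.
    for p in range(1, max_period + 1):
        for pattern in product("01", repeat=p):
            yield "".join(pattern)

def predict_next(observed, max_period=6):
    # Solomonoff-style prediction: weight each hypothesis consistent with
    # the observations by the universal prior 2^(-description length),
    # then return the mixture's probability that the next bit is "1".
    weight_one = Fraction(0)
    total = Fraction(0)
    for h in hypotheses(max_period):
        # Repeat the pattern far enough to cover observed plus one more bit.
        stream = h * (len(observed) // len(h) + 2)
        if stream.startswith(observed):
            w = Fraction(1, 2 ** len(h))
            total += w
            if stream[len(observed)] == "1":
                weight_one += w
    return weight_one / total

print(predict_next("01010"))  # → 7/8
```

Note how the short pattern "01" dominates the posterior: longer patterns like "010100" also fit the data, but their 2^(-length) prior weight is exponentially smaller, which is exactly the Occam-style bias that makes the mixture converge on simple explanations.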
All of AI can be reduced to universal inference, so understanding how to do that optimally with infinite compute perhaps helps one think more clearly about how practical, efficient inference algorithms can exploit structural regularities to approximate the ideal using vastly less compute.
Because AIXI is the first complete mathematical model of a general AI and is based on Solomonoff induction.
Also, a computable approximation to the Solomonoff prior has been used to teach a small AI to play video games unsupervised.
So, yeah.