Solomonoff induction is uncomputable? So: use a computable approximation.
What is the argument that the approximation you use is good? What I mean is, when you approximate you are making changes. Some possible changes you could make would create massive errors. Others—the type you are aiming for—only create small errors that don’t spread all over. What is your method of creating an approximation of the second type?
Making computable approximations of Solomonoff induction is a challenging field, and it seems inappropriate to try to cram it into a blog comment. Probably the short answer is “by using stochastic testing”.
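The comment doesn’t spell out what “stochastic testing” amounts to, but a minimal sketch of the general idea might look like the following (a toy construction of my own, not anything specified in the thread): bound the machine’s running time, sample random programs, and tally the continuations that agree with the observed data. Feeding uniform coin flips to a prefix machine samples each program of length n with probability 2^-n, so the counts approximate a Solomonoff-style mixture, truncated by the time and sample bounds.

```python
import random

# Toy machine (hypothetical, for illustration only): reads a program as a
# stream of coin-flip bits, decodes 2-bit opcodes, and writes output bits.
#   00 -> emit 0
#   01 -> emit 1
#   10 -> emit a copy of everything output so far
#   11 -> halt
def run_toy_machine(coin, max_steps=64, max_output=64):
    out = []
    for _ in range(max_steps):           # resource bound keeps this computable
        op = (coin(), coin())
        if op == (0, 0):
            out.append(0)
        elif op == (0, 1):
            out.append(1)
        elif op == (1, 0):
            out = out + out               # double the output so far
        else:                             # (1, 1): halt
            break
        if len(out) >= max_output:
            break
    return out

def predict_next_bit(observed, n_samples=200_000, rng=random.Random(0)):
    """Monte Carlo estimate of P(next bit = 1 | observed) under the toy prior.

    Uniform random program bits give a program of length n probability 2^-n,
    so counting the runs consistent with the data approximates the
    Solomonoff-style mixture, restricted by the time/sample bounds above.
    """
    ones = total = 0
    for _ in range(n_samples):
        out = run_toy_machine(lambda: rng.randint(0, 1))
        if len(out) > len(observed) and out[:len(observed)] == list(observed):
            total += 1
            ones += out[len(observed)]
    return ones / total if total else None

print(predict_next_bit([0, 1, 0, 1, 0, 1]))  # toy machine favors the periodic continuation
```

The step, output, and sample bounds are what make this computable; the price is that long or slow programs are simply missed, which is exactly the kind of approximation error the question is asking about.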
There’s a large amount of math behind this sort of thing, and frankly, given your other comments I’m not sure that you have enough background. It might help to just read up on Bayesian machine learning, which has to deal with just this sort of issue. Then keep in mind that there are theorems saying that, given some fairly weak conditions to rule out pathological cases, one can approximate any distribution by a computable distribution to arbitrary accuracy. You need to be careful about what metric you are using, but it turns out to be true for a variety of different notions of approximation and different metrics. While this is far from my area of expertise, my impression is that the theorems are essentially of the same flavor as the theorems one would see in a real analysis course about approximating functions with continuous functions or polynomial functions.
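For a concrete, if toy, instance of the “approximate to arbitrary accuracy in some metric” claim (my own illustration, assuming total variation distance as the metric): truncate a distribution on the positive integers to a finite support and renormalize; the error shrinks as fast as the tail mass does.

```python
from fractions import Fraction

# Toy illustration (my construction): approximate a distribution over the
# positive integers by a finite-support (hence trivially computable) one,
# and watch the total-variation error shrink as the cutoff N grows.
def tv_error_of_truncation(N):
    # Target: P(k) = 2^-k for k = 1, 2, 3, ...  (a simple geometric law)
    p = [Fraction(1, 2 ** k) for k in range(1, N + 1)]
    tail = 1 - sum(p)                        # mass beyond the cutoff
    q = [pk / (1 - tail) for pk in p]        # renormalized truncation
    # TV distance = 1/2 * sum of |p - q| over all outcomes (the tail counts in full)
    return (sum(abs(pk - qk) for pk, qk in zip(p, q)) + tail) / 2

for N in (2, 5, 10, 20):
    print(N, float(tv_error_of_truncation(N)))   # error equals the tail mass 2^-N
```

The pattern, pick a simple approximating family, bound the error in your chosen metric, and let a parameter grow, is the flavor of the Weierstrass-style approximation theorems the real-analysis analogy points at.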