A lot of people have a problem with Kolmogorov complexity and Solomonoff induction being “ideals”. Sure, you can’t build a working perfect compressor in order to compute the Kolmogorov complexity of a binary string; the best you can do is approximate it. Furthermore, the ways in which your compressor falls short of the perfect compression of Kolmogorov complexity are weaknesses that a more powerful compressor could overcome, and so on. It’s only in the limit that you get a completely general compressor that can’t be beaten, but by that point your compressor requires an infinite amount of computation to work perfectly. The solution is not to redefine the “perfect compressor”; that wouldn’t work, because anything less than Kolmogorov complexity can be beaten by some computable compressor. Instead, accept that the ideal can only be approximated in reality, and that the better we approximate it, the better we are doing. The same goes for Solomonoff induction.
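To make the “approximation” point concrete, here’s a minimal Python sketch (the helper name kc_upper_bound is just my label for it): the output length of any real, lossless compressor is a computable upper bound on K(x), up to an additive constant for the decompressor, and a stronger compressor simply tightens that bound.

```python
import bz2
import lzma
import os
import zlib

def kc_upper_bound(data: bytes) -> int:
    """Upper-bound the Kolmogorov complexity of `data`, in bytes.

    Any lossless compressor's output is a description of its input,
    so the output length bounds K(data) from above (up to the constant
    cost of the decompressor). Taking the min over several compressors
    just gives a slightly tighter bound.
    """
    return min(
        len(zlib.compress(data, 9)),
        len(bz2.compress(data, 9)),
        len(lzma.compress(data)),
    )

# A highly regular string compresses far below its raw length...
print(kc_upper_bound(b"ab" * 5000))       # small: the pattern is simple
# ...while random bytes barely compress at all.
print(kc_upper_bound(os.urandom(10000)))  # close to 10000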
Eli: In your previous post you write about Copenhagen vs. MWI as if we have to decide on one of them. However, that’s a somewhat un-Bayesian thing to do! A strict Solomonoff-Bayesian would simply accept that there is a posterior distribution over a space of infinitely many theories and interpretations. When this strict Bayesian goes to make a prediction about the outcome of an experiment, he will take all of these interpretations into account according to their posterior probabilities, including interpretations far more insane than anything you have described.
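As a toy illustration of what that mixing looks like (every theory name and number below is made up for the example), the strict Bayesian computes a posterior over the competing interpretations and predicts with the posterior-weighted mixture, never committing to a single winner:

```python
# Stipulated priors, something like 2^-K(theory) in spirit.
priors = {"theory_A": 0.6, "theory_B": 0.3, "theory_C": 0.1}

# Stipulated likelihoods of the observed data under each theory.
likelihoods = {"theory_A": 0.20, "theory_B": 0.50, "theory_C": 0.01}

# Each theory's predicted probability that the next measurement reads "up".
predictions = {"theory_A": 0.90, "theory_B": 0.40, "theory_C": 0.99}

# Bayes: posterior is proportional to prior times likelihood.
unnormalized = {t: priors[t] * likelihoods[t] for t in priors}
total = sum(unnormalized.values())
posterior = {t: w / total for t, w in unnormalized.items()}

# The prediction is the posterior-weighted average, not any one theory's answer.
p_up = sum(posterior[t] * predictions[t] for t in priors)
print(posterior)  # no interpretation gets probability 1
print(p_up)       # the odds the strict Bayesian actually bets at
```

Deciding on one interpretation amounts to rounding the largest posterior up to 1, which throws away information whenever the runner-up theories would have predicted differently.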
A lot of people have a problem with Kolmogorov complexity and Solomonoff induction being “ideals”. [...] Instead, accept that the ideal can only be approximated in reality and that the better we can approximate it the better we are doing.
That doesn’t sound like a very serious problem with these things being “ideals”.
Ideals don’t have to be attainable.