Truly random data is incompressible in the average case, by the pigeonhole principle.
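A quick empirical illustration of that claim (a sketch, not a proof): a general-purpose compressor like zlib cannot shrink OS-generated random bytes, and the deflate framing overhead typically makes the output slightly larger than the input.

```python
import os
import zlib

# 1 MiB of random bytes from the OS entropy source.
data = os.urandom(1 << 20)

# Compress at maximum effort; on incompressible input, deflate falls back
# to "stored" blocks, so the output carries only framing overhead.
compressed = zlib.compress(data, 9)

# The compressed stream is not smaller than the original random data.
print(len(data), len(compressed))
```

Running this repeatedly gives the same qualitative result: the compressed length is at least the input length, which is exactly what the pigeonhole argument predicts for typical random strings.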
Solomonoff induction still tries, though. It assumes there is always more signal in the noise. I’m not sure how you would justify stopping that search: how can you ever be certain there’s not some complex signal we just haven’t found yet?
But you should end up with a bunch of theories of similar Kolmogorov complexity.
I think you can justify stopping the search when you are hitting your resource limits and have long since ceased to find additional signal. You could be wrong, but it seems justified.
Solomonoff induction has no resource limit. It’s a theoretical framework for understanding machine learning when resources are not an issue, not an engineering proposal.
But it seems to me rather different to assume you can do any finite amount of calculation versus relying on things that can only be done with infinite calculation. Can we ever hope to have infinite resources?
Can we ever hope to have infinite resources?
I think epsilon.
Just to clarify, though: in universal induction every hypothesis is finite in size, so at no point does any process need to run an infinite program to discover its output. The infinite part is the memory space needed to hold the prior over all programs, and the infinite speed needed to update it.
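That structure can be sketched with a toy, resource-bounded stand-in. This is emphatically not real Solomonoff induction (which ranges over all programs for a universal machine); here the hypothesis class is a deliberately tiny, hypothetical one — each hypothesis is a finite bit-pattern predicting that pattern repeated forever — but it keeps the key features: every individual hypothesis is finite, and prior weight is 2^-length, so shorter hypotheses dominate. The function name `predict_next` is made up for this sketch.

```python
from itertools import product

def predict_next(observed, max_len=8):
    """Posterior probability that the next bit is 1, under a
    length-weighted prior over repeating bit-patterns."""
    weights = {0: 0.0, 1: 0.0}
    for n in range(1, max_len + 1):
        for pattern in product([0, 1], repeat=n):
            # The stream this hypothesis predicts, one bit past the data.
            stream = [pattern[i % n] for i in range(len(observed) + 1)]
            if stream[:len(observed)] == list(observed):
                # Consistent hypotheses vote with prior mass 2**-|pattern|.
                weights[stream[-1]] += 2.0 ** (-n)
    total = weights[0] + weights[1]
    return weights[1] / total

# The all-ones pattern of length 1 dominates the consistent hypotheses,
# so this prints a value near 1.
print(predict_next([1, 1, 1, 1, 1, 1]))
```

Note where the infinities were cut: `max_len` bounds the hypothesis space (the real construction has no such bound, hence the infinite prior), and the nested loops are the finite stand-in for an update that would otherwise require infinite speed. Each hypothesis evaluated along the way is still a finite object.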