Lots of people promote the compression idea. But as far as I know, with the exception of the Hutter Prize, no one has actually attempted large-scale compression in any systematic way. People in SNLP talk about language modeling, but they are not systematic about it (no shared benchmarks, no competitions, etc.). No one in computer vision attempts large-scale image compression.
I just spent an enjoyable lunch break reading about the Hutter Prize. Do you know whether the paper linked (via a now-broken link) from a few places in its FAQ, the one that seems to disparage Occam's Razor, is still online anywhere? A mailing-list entry is the most complete summary of it I could find.