I’m a little confused. Removing one bit means that two possible images map onto each file that is 1 bit smaller, not 1024 possible images.
I’m also confused about what happens when you remove a million bits from the image. When you go to restore 1 bit, there are two possible files that are 1 bit larger. But which is the one you want? Both are meaningless strings of bits until you get back to the final image, and there are 2^million possible final images.
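A toy sketch of the ambiguity, under the (hypothetical) assumption that "compression" simply drops the last bit of a bit string: every compressed string then has exactly two possible preimages, and nothing in the shorter string tells you which one was intended.

```python
# Toy model: "compress" a bit string by dropping its last bit.
def compress(bits: str) -> str:
    return bits[:-1]

# Each compressed string has exactly two candidate originals:
# the string with '0' appended and the string with '1' appended.
def preimages(compressed: str) -> list[str]:
    return [compressed + "0", compressed + "1"]

# Two distinct inputs collapse onto the same compressed output,
# so decompression alone cannot tell them apart.
assert compress("1010") == compress("1011") == "101"
assert preimages("101") == ["1010", "1011"]
```

After removing a million bits, this ambiguity compounds: there are 2^million candidate originals, and the intermediate strings give no hint about which branch to take at each step.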
Ah, I see: the bit you remove is freely chosen. Though I’m still confused.
The problem I have is this: given a 24-million-bit raw image format consisting of a million RGB pixels, there are at least 2^million different images that look essentially identical to a human (say, a picture of my room with 1 bit of sensor noise per pixel in the blue channel). During the compression process the smaller files must remain distinct, otherwise we lose the ability to tell which one is correct on expansion.
So the process must stop once the compressed file reaches a million bits, because each of the 2^million possible pictures of my room must be assigned a distinct bit sequence, and at that size there is no room left to encode anything else.
But I can equally well apply the same argument to gigapixel images and conclude that this compression method can’t compress anything to less than a billion bits. The argument has no upper limit, so I’m not sure how it can ever compress anything at all.
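The counting step above can be checked directly. There are 2^n distinct bit strings of length n, but only 2^n − 1 strings of any length strictly shorter than n, so by the pigeonhole principle no lossless compressor can shrink every n-bit input; real compressors only shrink the inputs they expect (and expand others). A minimal sketch:

```python
# Count all bit strings strictly shorter than n bits (lengths 0..n-1).
def count_shorter(n: int) -> int:
    return sum(2 ** k for k in range(n))  # geometric sum = 2**n - 1

n = 20
assert count_shorter(n) == 2 ** n - 1
# There are fewer possible outputs than inputs, so some pair of
# distinct n-bit inputs would have to share a compressed form:
assert count_shorter(n) < 2 ** n
```

This is consistent with the paradox as stated: the argument really does rule out compressing *everything*, and the resolution is that practical compressors trade expansion of unlikely inputs for shrinkage of likely ones.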
I’d expect that if the differences are large enough for a human to decide, they’re even easier for a computer to decide.