I actually did not read the linked thread until now; I came across this post on the front page and thought it was a potentially interesting challenge.
Regarding “in the concept of the fiction”, I think this piece of data is way too human to be convincing. The noise is effectively a ‘gotcha, sprinkle /dev/random into the data’.
Why sample with 24 bits of precision if the source image only has 8 bits of precision? It shows. And why add only <11 bits of noise, and uniform noise at that? It could work well if you had a 16-bit lossless source image, or even an approximation of one, but the way this image is constructed is way too artificial. (And why not Gaussian noise, or any other kind of more natural noise? Uniform noise pretty much never happens in practice.) One can also entirely separate the noise from the source data you used, because 8 + 11 < 24.
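To illustrate that last point, here is a minimal sketch. It assumes the 8-bit source value was shifted into the top bits of each 24-bit sample before the uniform noise in [0, 2^11) was added; the encoder below is my guess at the construction, not the actual challenge code:

    import random

    def make_sample(v8):
        # hypothetical encoder: 8-bit value in the top bits, noise below
        return (v8 << 16) + random.randrange(2**11)

    sample = make_sample(137)
    signal = sample >> 16      # noise < 2**16 can never reach bit 16
    noise  = sample & 0xFFFF   # everything below the signal bits
    assert signal == 137       # source value recovered exactly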
JPEG block artifacts were visible while I was analyzing the planes of the image; that’s why I thought the Bayer filter was possibly 4x4 pixels in size. I believe you downsampled the image from a JPEG at approximately 2000x1200 resolution, which affects the analysis and breaks the fiction that this is raw sensor data from an alien civilization.
With this many flaws already in the data, I do believe cracking the PRNG is within limits.
(1) is possibly true. At least it’s true in this case, although in practice understanding the structure of the data doesn’t actually help very much compared with the best general-purpose compressors from the PAQ family.
It doesn’t help that lossless image compression algorithms kinda suck. I can often get better results by running zpaq on a NetPBM file than by using a special-purpose format like PNG or even lossless WebP (although the latter is usually at least somewhat competitive with the zpaq results).
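For reference, this is the kind of comparison I mean, as a rough sketch. It assumes Pillow is installed, a zpaq binary is on PATH, and (if I remember its CLI correctly) -m5 selects its strongest built-in method; the filenames are made up:

    import os, subprocess
    from PIL import Image

    img = Image.open("bismuth.png").convert("RGB")
    img.save("bismuth.ppm")    # raw NetPBM, no compression of its own

    # pack the PPM into a zpaq archive at its strongest setting
    subprocess.run(["zpaq", "a", "bismuth.zpaq", "bismuth.ppm", "-m5"],
                   check=True)

    print("png :", os.path.getsize("bismuth.png"))
    print("zpaq:", os.path.getsize("bismuth.zpaq"))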
(2) I’d say my decompressor would contain useful info about the structure of the data, or at least the file format itself, however...
...it would not contain any useful representation of the pictured piece of bismuth. The lossless compression requirement hurts a lot. Reverse-rendering techniques for various representations do exist, but they are either lossy or larger than the source data.
Constructing and raytracing a NeRF / SDF / voxel grid / whatever might possibly be competitive if you had dozens (or maybe hundreds) of shots of the same bismuth piece at different angles, but it really doesn’t pay off for a single image, especially at this quality, and especially with all the JPEG artifacts that leaked through.
I feel like this is a bit of a wasted opportunity: you could have chosen any of a number of different modalities of data, even something like a stream of data from the IMU in your phone as you walk around the house. You would not need to add any artificial noise; it would already be there in the source data. Modeling that could actually be interesting (if the sample rate of the IMU were high enough for a physics-based model to help).
I also think that viewing the data ‘wrongly’ and figuring out something about it despite that is a feature, not a bug.
Updates on best results so far:
General-purpose compression on the original file, using cmix:
Results with knowledge about the contents of the file: https://gist.github.com/mateon1/f4e2b8e3fad338405fa793fb155ebf29 (spoilers).
Summary:
The best general-purpose method, after massaging the structure of the data, manages 713248 bytes.
The best purpose-specific method manages to compress the data, minus headers, to 712439 bytes.
(1) The first thing I did when approaching this was think about how the message is actually transmitted: the preamble at the start of the transmission to synchronize clocks, the headers for source & destination, the parity bits after each byte, even things like using inverted parity on the header so that a true header can be distinguished from bytes within a message that merely look like one, plus optional checksum calculations.
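A toy version of that framing, to make the idea concrete; every constant here (preamble length, header fields) is illustrative rather than the actual challenge value:

    def parity(byte):                  # 1 if an odd number of set bits
        return bin(byte).count("1") & 1

    def frame(payload, src=0x01, dst=0x02):
        bits = [1, 0] * 16             # preamble: alternate for clock sync
        for b in (src, dst, len(payload) & 0xFF):
            bits += [int(x) for x in f"{b:08b}"]
            bits.append(parity(b) ^ 1) # inverted parity marks header bytes
        for b in payload:
            bits += [int(x) for x in f"{b:08b}"]
            bits.append(parity(b))     # normal parity for message bytes
        return bits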
(2) I then thought about how I would actually represent the data so it wasn’t just traditional 8-bit bytes: I created encoders & decoders for 36/24/12/6-bit unsigned and signed ints, 30/60-bit non-traditional floating point, etc.
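The packing itself is straightforward once nothing is byte-aligned; a sketch of the integer side (the odd-width floats follow the same bitstream approach):

    def pack_uint(bits, value, width):
        # append `width` bits, MSB first, to a flat bitstream
        assert 0 <= value < (1 << width)
        bits.extend((value >> i) & 1 for i in reversed(range(width)))

    def pack_int(bits, value, width):
        # two's-complement signed value in `width` bits
        pack_uint(bits, value & ((1 << width) - 1), width)

    stream = []
    pack_uint(stream, 4000, 12)   # a 12-bit unsigned sample
    pack_int(stream, -17, 6)      # a 6-bit signed sample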
Finally, I created a mock telemetry stream consisting of time-series data from many different sensors, with all of the sensor values packed into a single frame using the data types from (2), and repeatedly transmitted that frame over the varying time series, using (1), until I had >1 MB.
And then I didn’t submit that, and instead swapped to a single message using the transmission protocol that I designed first, and shoved an image into that message instead of the telemetry stream.
To avoid the flaw where the message is “just” 1-byte RGB, I viewed each pixel in the filter as being measured by a 24-bit ADC. That way someone decoding it has to consider byte order when forming the 24-bit values.
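That byte-order ambiguity is real: the same three bytes decode to very different readings depending on the assumed endianness.

    raw = bytes([0x12, 0x34, 0x56])
    big    = int.from_bytes(raw, "big")     # 0x123456 = 1193046
    little = int.from_bytes(raw, "little")  # 0x563412 = 5649426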
Then, I added only a few LSB of noise because I was thinking about the type of noise you see on ADC channels prior to more extensive filtering. I consider it a bug that I only added noise in some interval [0, +N], when I should have allowed the noise to be positive or negative. I am less convinced that the uniform distribution is incorrect. In my experience, ADC noise is almost always uniform (and only present in a few LSB), unless there’s a problem with the HW design, in which case you’ll get dramatic non-uniform “spikes”. I was assuming that the alien HW is not so poorly designed that they are railing their ADC channels with noise of that magnitude.
I wanted the color data to be more complicated than just RGB, so I used a Bayer filter; that way people decoding it would need to demosaic the color channels. This further increased the size of the image.
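For anyone who hasn’t done it, demosaicing is not much code in its crudest form. A minimal sketch assuming an RGGB layout (the actual pattern and cell size in the challenge may differ), with numpy:

    import numpy as np

    def demosaic_rggb(mosaic):
        # mosaic: (H, W) array of raw samples, H and W even
        r = mosaic[0::2, 0::2]
        g = (mosaic[0::2, 1::2].astype(np.uint32)
             + mosaic[1::2, 0::2]) // 2       # average both green sites
        b = mosaic[1::2, 1::2]
        return np.dstack([r, g.astype(mosaic.dtype), b])  # (H/2, W/2, 3)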
The original full-resolution image produced a file much larger than 1 MB when put through the above process (3 8-bit RGB → 4 24-bit Bayer), so I cut the resolution of the source image until the output was more reasonably sized. I wasn’t thinking about how that would impact the image analysis, because I was still thinking about the data types (byte order, number of bits, bit ordering) more than the actual image content.
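The back-of-envelope arithmetic, reading “3 8-bit RGB → 4 24-bit Bayer” as each source pixel expanding into a 2x2 cell of 24-bit samples (the 360x240 resolution below is hypothetical):

    w, h = 360, 240
    rgb_bytes   = w * h * 3       # source: 3 bytes per pixel
    bayer_bytes = w * h * 4 * 3   # output: 4 samples x 3 bytes each
    print(bayer_bytes / rgb_bytes)  # 4.0x, before protocol overhead
    print(bayer_bytes)              # 1036800, right around 1 MB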
“Was the source image actually a JPEG?” I didn’t check for JPEG artifacts at all, or analyze the image beyond trying to find a nice picture of bismuth with the full color of the rainbow present, so that all of the color channels would be used. I just now searched for “bismuth png” on Google, got a few hits, opened one, and it was actually a JPG. I remember scrolling through a bunch of Google results before I found an image I liked, and then pulling it and saving it as a BMP. Even if I had downloaded a source PNG as I intended, I definitely didn’t check that the PNG itself wasn’t just a resaved JPEG.