I believe I had a good start analyzing the file, although I’m currently slightly stuck on the exact details of the encoding.
Spoilers ahead for people who want to try things entirely by themselves.
My initial findings were that the raw file easily compresses from 2100086 bytes to 846149 bytes with zpaq -m5; its context modeling is probably hard to beat with other compressors, or even with manual implementations.
I wrote a Python script to reverse the intern’s transformation and analyzed the bits; it looks like the file is largely organized into septets of data. I dumped those septets in all the ways that made sense (inverted bits, inverted order of bits) into a file. None of them compressed better than the source.
I opened those files with GIMP as raw data files, and noticed that at 4000x600 8-bit grayscale with a small offset for the header it looks like some sort of 2D data, but it’s extremely noisy. There are lots of very flat areas in the file, though, which is why it seems to compress so well. Likely a threshold of some sort on the sensor data.
By only taking every fourth septet I could form four 1000x600 grayscale images. One of them seems to have actual structure (I presume it’s the most significant septet), the rest are dominated by noise.
It’s recognizable as a picture of a piece of crystallized bismuth. (https://i.imgur.com/rP5sq56.png)
There also seems to be some sort of pattern in the picture pixels similar to the raw data behind a Bayer filter in a camera. It might be a 2x2 or a 4x4 filter, I can’t quite tell. It could also just be some sort of block artifacts from the picture source (which wouldn’t be present in raw sensor data from aliens, but the author needs to source this image somehow).
The binary encoding seems to involve a sign bit in each septet if I’m interpreting things correctly, but I’m not sure how to combine septets into a sensible value yet.
I’m currently stuck at this point and won’t have time to work on it more this evening, but it seems like good progress for an hour and a half.
I sat down before going to bed and believe I have made some more progress.
I experimented with what I called the sign bit in the earlier post, and I’m certain I got it wrong. By ignoring the sign bit, I can reconstruct a much higher fidelity image. I can also do a non-obvious operation of rotating the bit to the least significant place after inverting it. I can’t visually distinguish the two approaches, though.
I wrote a naive debayering filter and got this image out: https://i.imgur.com/e5ydBTb.png (bit rotated version, 16-bit color RGB. Red channel on even rows/columns is exact, blue channel on odd rows/columns is exact, green channel is exact elsewhere.)
You can reverse image search that image to find that it’s a standard stock photo of bismuth, example larger but slightly cropped version: http://blog-imgs-98.fc2.com/s/c/i/scienceminestrone/Bismuth.jpg
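A sketch of that naive debayering step, as a nearest-neighbour fill (assuming the layout described above: red exact at even rows/columns, blue exact at odd rows/columns, green elsewhere; the exact pattern is one of my assumptions here):

```python
import numpy as np

def naive_demosaic(raw):
    """Naive nearest-neighbour demosaic for a 2x2 mosaic with red at
    even rows/columns, blue at odd rows/columns, and green elsewhere.
    raw: 2D array with even height and width."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), raw.dtype)
    # Exact samples, one channel per mosaic position:
    rgb[0::2, 0::2, 0] = raw[0::2, 0::2]  # red
    rgb[1::2, 1::2, 2] = raw[1::2, 1::2]  # blue
    rgb[0::2, 1::2, 1] = raw[0::2, 1::2]  # green
    rgb[1::2, 0::2, 1] = raw[1::2, 0::2]  # green
    # Nearest-neighbour fill for the missing samples:
    rgb[0::2, 1::2, 0] = rgb[0::2, 0::2, 0]  # red: copy right
    rgb[1::2, :, 0] = rgb[0::2, :, 0]        # red: copy down
    rgb[1::2, 0::2, 2] = rgb[1::2, 1::2, 2]  # blue: copy left
    rgb[0::2, :, 2] = rgb[1::2, :, 2]        # blue: copy up
    rgb[0::2, 0::2, 1] = rgb[0::2, 1::2, 1]  # green: fill even diagonal
    rgb[1::2, 1::2, 1] = rgb[1::2, 0::2, 1]  # green: fill odd diagonal
    return rgb
```

Libraries like OpenCV implement proper interpolating demosaicing; the naive fill is just enough to “see” the picture.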
Finding the original image would definitely help, even if it wouldn’t fit the spirit of this challenge.
I haven’t yet tried going in the reverse direction and seeing how much space this saves—doing this properly is tricky. I’m not aware of any good, ready-to-use libraries that provide things like context modeling and arithmetic coding, so writing a custom compressor is a lot of work.
In fact, I believe it may be worth trying to break the author’s noise source on the sensor. Most programming languages use a fairly breakable PRNG, either a xorshift variant or an LCG. But this may be a dead end if cryptographic randomness was used. Again, this wouldn’t fit the spirit of this challenge, but it would minimize description length if it worked.
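For instance, if the noise came from a full-state LCG, three consecutive raw outputs are enough to recover its constants. A sketch (Python ≥ 3.8 for the three-argument pow; this assumes the outputs are untruncated state values and that the first difference is invertible mod 2^32):

```python
def crack_lcg(x0, x1, x2, m=2**32):
    """Recover a and c of x' = (a*x + c) % m from three consecutive
    full-state outputs; requires gcd(x1 - x0, m) == 1."""
    # x2 - x1 == a * (x1 - x0) (mod m), so divide out the difference.
    a = (x2 - x1) * pow(x1 - x0, -1, m) % m
    c = (x1 - a * x0) % m
    return a, c

# Sanity check with the Numerical Recipes constants:
a, c, m = 1664525, 1013904223, 2**32
xs = [12345]
for _ in range(2):
    xs.append((a * xs[-1] + c) % m)
assert crack_lcg(*xs) == (a, c)
```

Truncated-output LCGs and xorshift variants take more work (lattice reduction, or solving the linear system over GF(2)), but are still very breakable.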
I have further discovered that in the bulk of the data, the awkward seventh bit is not in fact part of the values in the image, it is a parity bit. My analysis was confused by counting septets from the beginning of the file, which unfortunately seems incorrect.
Analyzing bi-gram statistics on the septets helped me figure out that what I previously believed to be the ‘highest’ bit is in fact the lowest bit of the previous value, and that that bit always makes the parity of the septet even.
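That alignment search is easy to automate: try all seven boundary offsets and keep the one where (nearly) every septet has even parity. A sketch, with `bits` as a list of 0/1 ints:

```python
def find_septet_offset(bits):
    """Return (offset, fraction) for the bit offset in 0-6 at which
    the largest fraction of septets has even parity."""
    def score(off):
        checks = [sum(bits[i:i + 7]) % 2 == 0
                  for i in range(off, len(bits) - 6, 7)]
        return sum(checks) / len(checks)
    best = max(range(7), key=score)
    return best, score(best)
```

At the wrong offset roughly half the septets pass by chance; at the right one, essentially all of them do.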
I was trying to ignore the header for now after failing to find values corresponding to the image width and height, but it looks like it’s biting me in the ass right now.
Analyzing the file more carefully:
The first 78 bits are the ‘prelude’, presumably added by the transmission system itself to establish a clock for the signal.
All following bits are divided into septets, with each septet’s lowest (last) bit being a parity check.
But: the first 37 septets have inverted (odd) parity (it’s possible this is metadata for the transmitting system itself).
The next 50 septets constitute some kind of header (with even parity).
The remaining septets are image data (with even parity), four septets per pixel, most significant first. This means the image has 24-bit depth.
There’s a rogue zero bit at the end of the transmission to make things a multiple of 8 bits. Note that the intern’s code does not explain the last zero bit. The code would not output extra data at the end, it would truncate the file if anything, so the zero bit must have been part of the transmission.
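Putting those findings together, a decoder for the framing might look like this (a sketch; `bits` is the whole transmission as a list of 0/1 ints, and the field sizes are the ones deduced above):

```python
def parse_transmission(bits):
    """Decode the layout described above: a 78-bit prelude, 37
    odd-parity septets, 50 even-parity header septets, then image
    septets (6 data bits, parity bit last), plus one pad bit."""
    pos = 78  # skip the clock-establishing prelude

    def take(n, odd_parity=False):
        nonlocal pos
        vals = []
        for _ in range(n):
            s = bits[pos:pos + 7]
            assert sum(s) % 2 == int(odd_parity), "parity error"
            vals.append(int("".join(map(str, s[:6])), 2))
            pos += 7
        return vals

    meta = take(37, odd_parity=True)
    header = take(50)
    data = take((len(bits) - pos - 1) // 7)  # leave the final pad bit
    # Four 6-bit septet payloads per pixel, most significant first:
    pixels = [(data[i] << 18) | (data[i + 1] << 12)
              | (data[i + 2] << 6) | data[i + 3]
              for i in range(0, len(data), 4)]
    return meta, header, pixels
```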
Curiously, by removing the data mentioned in the spoiler and packing everything into nice big-endian values, zpaq compresses the file worse than it does the original file. Just the main section of the file compresses to 850650 bytes.
Morning progress so far:
I figured out how the values (and the noise) are generated.
The source image is an 8-bit-per-channel color image; the source pixel value is chosen from one of the color channels using a Bayer filter, with a blue filter at (0, 0).
The final value is given by:
clamp(0, 2**24-1, source_value * 65793 + uniform(0, 1676))
where uniform(x, y) is a uniformly chosen random value between x and y inclusive.

Without cracking the noise source, the best we can do to encode the noise itself is 465255 bytes.
...because 347475 pixels in the image have non-255 values, and log2(1677^347475) = 3722036.5 bits.
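A sketch of the generator as I understand it, plus a check of that arithmetic (random.randint stands in for whatever noise source was actually used):

```python
import math
import random

def encode_pixel(source_value, rng=random):
    """Scale an 8-bit sample to 24 bits and add the observed noise.
    65793 == 0x010101, so this replicates the byte into all three byte
    positions: 255 * 65793 == 2**24 - 1 exactly, which is why only
    source_value == 255 ever hits the clamp (and loses its noise)."""
    v = source_value * 65793 + rng.randint(0, 1676)  # uniform, inclusive
    return max(0, min(2**24 - 1, v))

# The noise floor: each of the 347475 non-saturated pixels carries
# log2(1677) bits of irreducible noise.
noise_bits = 347475 * math.log2(1677)
assert math.ceil(noise_bits / 8) == 465255
```

Since the noise span (1677) is smaller than the step between source values (65793), the source value is always exactly recoverable as the transmitted value divided by 65793.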
The best I could do to encode the bulk of the data is 276554 bytes, again with the general purpose zpaq compressor.
Aside on attempted alternative compression methods for the bulk of the data:
Image compression did quite poorly here. I thought Adam7 interlacing would help compression due to the Bayer-filter pattern, but PNG did not perform well; with zopflipng + pngcrush the best I could achieve was 329939 bytes.
This gives an approximate lower bound of 741809 bytes without either modeling the actual data better, or cracking the noise source. This does not include the data needed to describe the decompressor and glue all the data together into the original bitstream.
My understanding of faul_sname’s claim is that for the purpose of this challenge we should treat the alien sensor data output as an original piece of data.
In reality, yes, there is a source image that was used to create the raw data that was then encoded and transmitted. But in the context of the fiction, the raw data is supposed to represent the output of the alien sensor, and the claim is that the decompressor + payload is less than the size of just an ad-hoc gzipping of the output by itself. It’s that latter part of the claim that I’m skeptical towards.

There is so much noise in real sensors—almost always the first part of any sensor processing pipeline is some type of smoothing, median filtering, or other type of noise reduction. If a solution for a decompressor involves saving space on encoding that noise by breaking a PRNG, it’s not clear to me how that would apply to a world in which this data has no noise-less representation available. However, a technique of measuring & subtracting noise so that you can compress a representation that is more uniform, and then applying the noise as a post-processing op during decoding, is definitely doable.
Assuming that you use the payload of size 741809 bytes, and are able to write a decompressor + “transmitter” for that in the remaining ~400 KB (which should be possible, given that 7z is ~450 KB, zip is ~349 KB, other compressors are in similar size ranges, and you’d be saving space since you just need the decoder portion of the code), how would we rate that against the claims?
The claims were:
(1) It would be possible for me, given some time to examine the data, to create a decompressor and a payload such that running the decompressor on the payload yields the original file, and the decompressor program + the payload have a total size of less than the original gzipped file.
(2) The decompressor would legibly contain a substantial amount of information about the structure of the data.
(1) seems obviously met, but (2) is less clear to me. Going back to the original claim, faul_sname said ‘we would see that the winning programs would look more like “generate a model and use that model and a similar rendering process to what was used to original file, plus an error correction table” and less like a general-purpose compressor’.
So far though, this solution does use a general purpose compressor. My understanding of (2) is that I was supposed to be looking for solutions like “create a 3D model of the surface of the object being detected and then run lighting calculations to reproduce the scene that the camera is measuring”, etc. Other posts from faul_sname in the thread, e.g. here seem to indicate that was their thinking as well, since they suggested using ray tracing as a method to describe the data in a more compressed manner.
What are your thoughts?
Regarding the sensor data itself
I alluded to this in my post here, but I was waffling and backpedaling a lot on what would be “fair” in this challenge. I gave a bunch of examples in the thread of what would make a binary file difficult to decode—e.g. non-uniform channel lengths, an irregular data structure, or multiple types of sensor data interwoven into the same file—and then did basically none of that, because I kept feeling like the file was unapproachable. Anything that was >1 MB of binary data but not a 2D image (or series of images) seemed impossible. For example, the first thing I suggested in the other thread was a stream of telemetry from some alien system.
I thought this file would strike a good balance, but I now see that I made a crucial mistake: I didn’t expect that you’d be able to view it with the wrong number of bits per byte (7 instead of 6) and then skip almost every byte and still find a discernible image in the grayscale data. Once you can “see” what the image is supposed to be, the hard part is done.
I was assuming that more work would be needed for understanding the transmission itself (e.g. deducing the parity bits by looking at the bit patterns), and then only after that would it be possible to look at the raw data by itself.
I had a similar issue when I was playing with LIDAR data as an alternative to a 2D image. I found that a LIDAR point cloud is eerily similar enough to image data that you can stumble upon a depth map representation of the data almost by accident.
I actually did not read the linked thread until now, I came across this post from the front page and thought this was a potentially interesting challenge.
Regarding “in the context of the fiction”, I think this piece of data is way too human to be convincing. The noise is effectively a ‘gotcha, sprinkle in /dev/random into the data’.
Why sample with 24 bits of precision if the source image only has 8 bits of precision? (And it shows.) Then why add only <11 bits of noise, and uniform noise at that? It could work well if you had a 16-bit lossless source image, or even an approximation of one, but the way this image is constructed is way too artificial. (And why not Gaussian noise, or any other kind of more natural noise? Uniform noise pretty much never happens in practice.) One can also entirely separate the noise from the source data you used, because 8 + 11 < 24.
JPEG-caused block artifacts were visible while I was analyzing the planes of the image; that’s why I thought the Bayer filter was possibly 4x4 pixels in size. I believe you likely downsampled the image from a JPEG at approximately 2000x1200 resolution, which does affect analysis and breaks the fiction that this is raw sensor data from an alien civilization.
With these kinds of flaws I do believe cracking the PRNG is within limits since the data is already really flawed.
(1) is possibly true. At least it’s true in this case, although in practice understanding the structure of the data doesn’t actually help very much vs some of the best general purpose compressors from the PAQ family.
It doesn’t help that lossless image compression algorithms kinda suck. I can often get better results by using zpaq on a NetPBM file than using a special purpose algorithm like png or even lossless webp (although the latter is usually at least somewhat competitive with the zpaq results).
(2) I’d say my decompressor would contain useful info about the structure of the data, or at least the file format itself, however...
...it would not contain any useful representation of the pictured piece of bismuth. The lossless compression requirement hurts a lot. Reverse rendering techniques for various representations do exist, but they are either lossy, or larger than the source data.
Constructing and raytracing a NeRF / SDF / voxel grid / whatever might possibly be competitive if you had dozens (or maybe hundreds) of shots of the same bismuth piece at different angles, but it really doesn’t pay for a single image, especially at this quality, especially with all the jpeg artifacts that leaked through, and so on.
I feel like this is a bit of a wasted opportunity, you could have chosen a lot of different modalities of data, even something like a stream of data from the IMU sensor in your phone as you walk around the house. You would not need to add any artificial noise, it would already be there in the source data. Modeling that could actually be interesting (if the sample rate on the IMU was high enough for a physics-based model to help).
I also think that viewing the data ‘wrongly’ and figuring out something about it despite that is a feature, not a bug.
Updates on best results so far:
General purpose compression on the original file, using cmix:
Results with knowledge about the contents of the file: https://gist.github.com/mateon1/f4e2b8e3fad338405fa793fb155ebf29 (spoilers).
Summary:
The best general-purpose method after massaging the structure of the data manages 713248 bytes.
The best purpose specific method manages to compress the data, minus headers, to 712439 bytes.
(1) The first thing I did when approaching this was think about how the message is actually transmitted. Things like the preamble at the start of the transmission to synchronize clocks, the headers for source & destination, or the parity bits after each byte, or even things like using an inverted parity on the header so that it is possible to distinguish a true header from bytes within a message that look like a header, and even optional checksum calculations.
(2) I then thought about how I would actually represent the data so it wasn’t just traditional 8-bit bytes—I created encoders & decoders for 36/24/12/6 bit unsigned and signed ints, and 30 / 60 bit non-traditional floating point, etc.
Finally, I created a mock telemetry stream that consisted of a bunch of time-series data from many different sensors, with all of the sensor values packed into a single frame with all of the data types from (2), and repeatedly transmitted that frame over the varying time series, using (1), until I had >1 MB.
And then I didn’t submit that, and instead swapped to a single message using the transmission protocol that I designed first, and shoved an image into that message instead of the telemetry stream.
To avoid the flaw where the message is “just” 1-byte RGB, I viewed each pixel in the filter as being measured by a 24-bit ADC. That way someone decoding it has to consider byte-order when forming the 24-bit values.
Then, I added only a few LSB of noise because I was thinking about the type of noise you see on ADC channels prior to more extensive filtering. I consider it a bug that I only added noise in some interval [0, +N], when I should have allowed the noise to be positive or negative. I am less convinced that the uniform distribution is incorrect. In my experience, ADC noise is almost always uniform (and only present in a few LSB), unless there’s a problem with the HW design, in which case you’ll get dramatic non-uniform “spikes”. I was assuming that the alien HW is not so poorly designed that they are railing their ADC channels with noise of that magnitude.

I wanted the color data to be more complicated than just RGB, so I used a Bayer filter; that way people decoding it would need to demosaic the color channels. This further increased the size of the image.
The original, full resolution image produced a file much larger than 1 MB when it was put through the above process (3 8-bit RGB → 4 24-bit Bayer), so I cut the resolution on the source image until the output was more reasonably sized. I wasn’t thinking about how that would impact the image analysis, because I was still thinking about the data types (byte order, number of bits, bit ordering) more so than the actual image content.
“Was the source image actually a JPEG?” I didn’t check for JPEG artifacts at all, or analyze the image beyond trying to find a nice picture of bismuth with the full color of the rainbow present so that all of the color channels would be used. I just now did a search for “bismuth png” on Google, got a few hits, opened one, and it was actually a JPG. I remember scrolling through a bunch of Google results before I found an image that I liked, and then I just remember pulling & saving it as a BMP. Even if I had downloaded a source PNG as I intended, I definitely didn’t check that the PNG itself wasn’t just a resaved JPEG.