Edit: The y-axis is inverted. The graph shows differences when it should show similarities.
Each bit of the message is 55% likely to be the same as its predecessor; the bits of the message are autocorrelated.
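In case anyone wants to reproduce this, here's a minimal sketch of the check. The `bits` array below is a placeholder; in the real run it would be the message loaded as a 1-D NumPy array of 0s and 1s:

```python
import numpy as np

# Placeholder: stand-in for the real message, loaded as a 1-D array of 0s and 1s.
bits = np.random.randint(0, 2, size=64 * 1000)

# Fraction of bits that match their immediate predecessor.
same_as_prev = np.mean(bits[1:] == bits[:-1])
print(f"P(bit == previous bit) = {same_as_prev:.1%}")
```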
If we split the message into 64-bit chunks, each bit in a given chunk has a 68% chance of being the same as the corresponding bit in the previous chunk.
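Roughly the same computation, this time comparing each 64-bit chunk to the one before it (again with a placeholder `bits` array standing in for the actual message, assumed to have length a multiple of 64):

```python
import numpy as np

bits = np.random.randint(0, 2, size=64 * 1000)  # placeholder for the real message
chunks = bits.reshape(-1, 64)

# Fraction of bits that match the corresponding bit in the previous chunk.
same_as_prev_chunk = np.mean(chunks[1:] == chunks[:-1])
print(f"P(bit == same position, previous chunk) = {same_as_prev_chunk:.1%}")
```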
If you split the message into 64-bit chunks, the last 6 bits of each chunk are always identical. That is, they’re either “000000” or “111111”. I don’t think there’s a third spatial dimension to the data, as chunking by n*64 doesn’t yield any substantial change in autocorrelation for n > 1.
Oh, and the 7th-last bit (index [57]) is always the opposite of the final six bits. I interpret this either as the product of some cellular automaton rule, or as the “aliens” giving us six redundant bits in order to help us figure out the 64-bit reading frame.
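A sketch of how one could verify both of these claims, plus the n*64 reading-frame check (placeholder data again; the variable names are my own):

```python
import numpy as np

bits = np.random.randint(0, 2, size=64 * 1000)  # placeholder for the real message
chunks = bits.reshape(-1, 64)

# Are the last six bits of every chunk identical to one another?
last_six_uniform = np.all(chunks[:, 58:] == chunks[:, 58:59])
# Is bit [57] always the complement of those six?
seventh_last_flipped = np.all(chunks[:, 57] == 1 - chunks[:, 58])
print(last_six_uniform, seventh_last_flipped)

# Try chunk widths of n*64: does the corresponding-bit similarity change much?
for n in range(1, 6):
    width = 64 * n
    c = bits[: (len(bits) // width) * width].reshape(-1, width)
    print(n, np.mean(c[1:] == c[:-1]))
```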
There’s something up with the eighth bit (index [7]) of every 64-bit chunk. It has a remarkably low turnover rate (23%) when compared to its next-door neighbors. Bits [48:51] also have low turnover rates (22%-25%), but the eighth bit’s low turnover rate uniquely persists when extended to context windows with lengths up to 20 chunks.
The last seven bits of every 64-bit chunk actually carry only one bit of information. The bit right before these seven (index [56]) has an abnormally high turnover rate w.r.t. its next-door neighbors (66%).
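For the turnover numbers above, something like the following sketch (placeholder data; I’m treating a “context window of length k” as comparing each chunk to the chunk k positions earlier, which is an assumption on my part):

```python
import numpy as np

bits = np.random.randint(0, 2, size=64 * 1000)  # placeholder for the real message
chunks = bits.reshape(-1, 64)

# Per-position turnover rate: how often each bit differs from the same
# position one chunk earlier.
turnover = np.mean(chunks[1:] != chunks[:-1], axis=0)
for i in (7, 48, 49, 50, 51, 56):
    print(f"bit [{i}]: {turnover[i]:.1%}")

# Same comparison at lags of up to 20 chunks, to see which positions
# stay unusually stable as the window grows.
for lag in range(1, 21):
    t = np.mean(chunks[lag:] != chunks[:-lag], axis=0)
    print(lag, f"bit [7]: {t[7]:.1%}", f"bit [48]: {t[48]:.1%}")
```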
Part of me wants to attribute this to some cellular automaton rule. But isn’t it interesting that, in a chunk, the eighth bit is unusually stable, while the eighth-last bit is unusually volatile? Some weird kind of symmetry at play here.