Even leaving issues of quantum physics aside, macroscopic physical objects like humans are unlikely to be very compressible (information-wise, that is). The author might feel that the number of lead atoms in their 36 molar tooth is not part of their Kolmogorov string, but I would argue that it is certainly part of a complete description.
I don’t know, just how compressible are we? I agree that the lead in my 36 molar is a part of my description, but anomalies such as these are always going to be the hardest part of compression since noise is not compressible. So maybe a complete description would look more like “all of the usual teeth, with xyz lead anomalies”.
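A quick way to see the point that noise is not compressible: feed a compressor highly regular data versus random bytes. A minimal sketch using Python's zlib as a stand-in for any general-purpose compressor (the specific inputs here are just illustrative):

```python
import os
import zlib

# Regular, structured data: analogous to "all of the usual teeth".
structured = b"tooth " * 1000  # 6000 bytes of pure repetition

# Random data: analogous to the "xyz lead anomalies".
noise = os.urandom(6000)  # 6000 incompressible bytes

# The structured input shrinks dramatically; the random input barely
# shrinks at all (it can even grow slightly from format overhead).
print(len(zlib.compress(structured, 9)))  # far smaller than 6000
print(len(zlib.compress(noise, 9)))       # still close to 6000
```

The description "all of the usual teeth, with xyz lead anomalies" is exactly this split: a short program for the regular part, plus the anomalies spelled out verbatim because there is no shorter way to encode them.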
In practice, that fine level of detail is not actually what I care about. Just like I listen to lossy compressed music, I would be fine with being uploaded into a somewhat lossy representation of myself where I don’t have any lead atoms in my teeth.
The “noise” of lead atoms in your teeth is among the least important bits in your Kolmogorov string, and would be the first to be dropped if you decided to allow a lossy representation. This reminds me of overfitting, actually. The first thing a model learns is the actually useful bits, and then later on, when you train too long, it starts to memorize the random noise in the dataset.
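The overfitting analogy can be sketched concretely: fit a low-degree and a high-degree polynomial to noisy linear data. A minimal illustration using NumPy (the data and degrees are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Underlying "signal" is a simple line; the noise plays the role of
# the lead atoms: real, but the least important bits.
x_train = np.linspace(0.0, 1.0, 20)
y_train = 2.0 * x_train + rng.normal(0.0, 0.2, size=20)

def train_mse(degree):
    """Fit a polynomial of the given degree and return its training error."""
    coeffs = np.polyfit(x_train, y_train, degree)
    return float(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))

# Degree 1 captures the useful bits (the slope) and treats the rest
# as residual noise; degree 9 drives training error lower by also
# memorizing the noise, which is exactly the overfitting regime.
print(train_mse(1))
print(train_mse(9))
```

The high-degree fit always achieves lower training error, but the extra bits it spends are describing the noise, not the signal, which is why they would generalize badly and why a lossy representation drops them first.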