The previous post looked at measuring the resemblance between some region and its environment as a possible definition of knowledge and found that it was not able to account for the range of possible representations of knowledge.
I found myself going back to the previous post to clarify what you mean here. I feel like you could do a better job of summarizing the issue of the previous post (maybe by mentioning the computer example explicitly?).
Formally, the mutual information between two objects is the gap between the entropy of the two objects considered as a whole, and the sum of the entropy of the two objects considered separately. If knowing the configuration of one object tells us nothing about the configuration of the other object, then the entropy of the whole will be exactly equal to the sum of the entropy of the parts, meaning there is no gap, in which case the mutual information between the two objects is zero. To the extent that knowing the configuration of one object tells us something about the configuration of the other, the mutual information between them is greater than zero.
I need to get deeper into information theory, but that is probably the most intuitive explanation of mutual information I’ve seen. I delayed reading this post because I worried that my half-remembered information theory wasn’t up to it, but you deal with that nicely.
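Just to make sure I have the gap definition straight, here is a minimal sketch of that computation in Python; the two coin distributions are toy examples of my own, not anything from the post:

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a distribution given as a dict {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), computed from a joint dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return entropy(px) + entropy(py) - entropy(joint)

# Two fair coins, independent: knowing one tells us nothing about the other,
# so the entropy of the whole equals the sum of the parts and the gap is zero.
independent = {(x, y): 0.25 for x in "HT" for y in "HT"}

# Two perfectly correlated coins: knowing one fully determines the other,
# so the gap is the full entropy of one coin, i.e. 1 bit.
correlated = {("H", "H"): 0.5, ("T", "T"): 0.5}

print(mutual_information(independent))  # 0.0
print(mutual_information(correlated))   # 1.0
```

The two extremes match the prose exactly: zero gap when the objects are independent, and a positive gap as soon as one object tells us something about the other.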
At the microscopic level, each photon that strikes the surface of an object might change the physical configuration of that object by exciting an electron or knocking out a covalent bond. Over time, the photons bouncing off the object being sought and striking other objects will leave an imprint in every one of those objects that will have high mutual information with the position of the object being sought. So then does the physical case in which the computer is housed have as much “knowledge” about the position of the object being sought as the computer itself?
Interestingly, I expect this effect to disappear as the measurements defining our two variables get less precise. In a sense, the mutual information between the case and the shipping container depends on measuring very subtle differences, whereas the mutual information between the computer and the shipping container is far more robust to loss of precision.
For example, a computer that is using an electron microscope to build up a circuit diagram of its own CPU ought to be considered an example of the accumulation of knowledge. However, the mutual information between the computer and itself is always equal to the entropy of the computer and is therefore constant over time, since any variable always has perfect mutual information with itself.
But wouldn’t there be a part of the computer that accumulates knowledge about the whole computer?
This is also true of the mutual information between the region of interest and the whole system: since the whole system includes the region of interest, the mutual information between the two is always equal to the entropy of the region of interest, since every bit of information we learn about the region of interest gives us exactly one bit of information about the whole system also.
Maybe it’s my lack of understanding of information theory speaking, but that sounds wrong. Surely there’s a difference between cases where the region of interest determines the full environment, and when it is completely independent of the rest of the environment?
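To pin down where my confusion is, here is a sketch that computes the mutual information between a region and the whole system in both of the cases I mention, using toy binary variables of my own choosing:

```python
import math

def entropy(dist):
    """Shannon entropy in bits of a distribution given as a dict {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def mutual_information(joint):
    """I(A;B) = H(A) + H(B) - H(A,B), computed from a joint dict {(a, b): probability}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0) + p
        pb[b] = pb.get(b, 0) + p
    return entropy(pa) + entropy(pb) - entropy(joint)

def region_vs_whole(region_env_joint):
    """MI between the region X and the whole system (X, E), where E is the environment."""
    joint = {(x, (x, e)): p for (x, e), p in region_env_joint.items()}
    return mutual_information(joint)

# Case 1: region and environment are independent binary variables.
independent = {(x, e): 0.25 for x in "01" for e in "01"}

# Case 2: the region fully determines the environment.
determined = {("0", "0"): 0.5, ("1", "1"): 0.5}

# Both print H(region) = 1 bit: the dependence between region and
# environment doesn't show up in I(region; whole system).
print(region_vs_whole(independent))
print(region_vs_whole(determined))
```

So at least for this toy setup the post's claim checks out numerically: the mutual information with the whole system is the entropy of the region in both cases, which suggests the difference I'm pointing at would have to be captured by some other quantity.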
The accumulation of information within a region of interest seems to be a necessary but not sufficient condition for the accumulation of knowledge within that region. Measuring mutual information fails to account for the usefulness and accessibility that makes information into knowledge.
Despite my comments above, that sounds broadly correct. For instance, I'm not sure that mutual information would capture your textbook example, even when the textbook contains a lot of knowledge.
Thanks again for a nice post in this sequence!