Knowledge is not just map/territory resemblance
Financial status: This is independent research. I welcome financial support to make further posts like this possible.
Epistemic status: This is in-progress thinking.
This post is part of a sequence on the accumulation of knowledge. Our goal is to articulate what it means for knowledge to accumulate within a physical system.
The challenge is this: given a closed physical system, if I point to a region and tell you that “knowledge is accumulating” in this region, how would you test my claim? What are the physical characteristics of the accumulation of knowledge? Rather than taking some agent as the fundamental starting point, we take a mechanistic physical system as the starting point and look for a definition of knowledge accumulation in terms of physical patterns.
In this post I explore the most direct approach: looking within the territory for a physical map that resembles the territory.
Example: Shipping container
In the shipping container example from the previous post, if we found within some region of interest a physical map with markings that clearly corresponded to the arrangement of items in the shipping container, then we could say, yes, knowledge exists within this region. If we revisited this region over several time steps and observed the markings on the map coming into closer and closer correspondence with the configuration of the overall shipping container, then we could say that knowledge is accumulating there.
Example: Sailing ship
Suppose we examined a sailing ship from a few hundred years ago and asked whether knowledge was accumulating on that sailing ship. If we found physical sheets of paper with drawings of nearby coastlines then we could say, yes, there is a process happening on this ship that is resulting in the accumulation of knowledge.
Resemblance
But what exactly does it mean for one thing to resemble another? The original LessWrong 1.0 had the following header at the top of each page, pointing at a certain concept of map/territory resemblance:
Suppose we converted both the “map” and “territory” images to grayscale and then looked at a 2D histogram of pixel intensities, with pixels from the “map” on the X axis and pixels from the “territory” on the Y axis.
We would find that the “map” and “territory” pixel intensities are predictive of one another, in the sense that knowing the “map” pixel intensity gives us some information about the corresponding “territory” pixel intensity, even if we don’t know the location of the pixel in the image.
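To make this concrete, here is a minimal sketch of the computation in Python. The function name and the number of bins are my own choices, not anything from the post; it treats each aligned pixel pair as one sample from a joint distribution and measures, in bits, how predictive the two intensities are of one another.

```python
import numpy as np

def pixelwise_mutual_information(map_img, territory_img, bins=16):
    """Estimate the mutual information (in bits) between the pixel
    intensities of two equally-sized grayscale images, treating each
    aligned pixel pair as one sample from a joint distribution."""
    # 2D histogram of (map intensity, territory intensity) pairs.
    joint, _, _ = np.histogram2d(map_img.ravel(), territory_img.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)  # marginal p(x) over map intensities
    py = pxy.sum(axis=0, keepdims=True)  # marginal p(y) over territory intensities
    mask = pxy > 0                       # avoid log(0) on empty histogram cells
    return np.sum(pxy[mask] * np.log2(pxy[mask] / (px * py)[mask]))
```

A positive value means that knowing a map pixel’s intensity narrows down the corresponding territory pixel’s intensity, which is the sense of “predictive” used above; the same estimator applies unchanged to 0/1 grids with bins=2.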
Now here we are just computing a resemblance between two images, neither of which is actually the territory. But we could go out and measure the average reflectivity of different parts of the surface of the Earth and compare those values to the street map above and get the same result. Or if we are asking whether knowledge is accumulating within a region of a cellular automaton like Conway’s Game of Life then we could look for a connection between the on/off configuration of cells in that region and the on/off configuration of cells in the whole system using the same method.
We can take this method beyond pixel-wise computation. If we found a military planning room with figurines laid out on a flat surface then we could plot the positions of those figurines against the coordinates of cities, people, or buildings in the physical world in order to discover whether these figurines represent accumulated knowledge.[1]
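As a rough sketch of how such a plot could be scored, we can reuse the estimator above on the paired coordinates. Everything here is an illustrative assumption rather than the post’s method: the pairing of each figurine to its real-world referent is taken as known, the layout is assumed to share the territory’s orientation so that each axis can be scored separately, and the names are hypothetical.

```python
# figurine_xy: (N, 2) array of figurine positions on the planning table.
# world_xy: (N, 2) array of real-world coordinates of the city, person,
# or building each figurine is hypothesized to represent.
def layout_mutual_information(figurine_xy, world_xy, bins=8):
    """Average per-axis mutual information between figurine positions
    and the coordinates of their real-world referents."""
    mi_x = pixelwise_mutual_information(figurine_xy[:, 0], world_xy[:, 0], bins)
    mi_y = pixelwise_mutual_information(figurine_xy[:, 1], world_xy[:, 1], bins)
    return 0.5 * (mi_x + mi_y)
```

A layout that is faithful but rotated relative to the territory would defeat the per-axis scoring and call for a joint estimate over both coordinates, and with only a handful of figurines the histogram estimate is noisy, so this is illustrative rather than a reliable test.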
Counterexample: Digital computer
The problem with directly looking for a resemblance between map and territory is that maps might be represented in all sorts of ways. A map drawn on a physical sheet of paper is easy to recognize as a map because there is a direct relationship between the spatial layout of markings on the map and the spatial layout of objects in the world. But if the map were instead represented as a file on a digital computer, then although we would still expect a relationship between the physical configuration of the computer’s memory units and the configuration of the coastline to exist, we wouldn’t expect to be able to discover it so easily.
General problem: Representation
Maps must exist physically within the territory, but their representation might make it impossible to recognize them by looking at a single configuration of the system.
Conclusion
The accumulation of knowledge clearly does have a lot to do with a resemblance between map and territory, but any notion of resemblance that can be defined with respect to a single configuration of some system cannot provide both necessary and sufficient conditions for the accumulation of knowledge. The next post will examine notions of resemblance that go beyond a single configuration of the system.
[1] To determine whether two distributions are predictive of one another we can compute mutual information. The next post in this sequence also uses mutual information, but in a different way. In this post we are computing mutual information between the configurations of parts of the map and parts of the territory given a single configuration of the system, whereas in the next post we will compute mutual information between the whole configuration of the map and the whole configuration of the territory, given many configurations of the system acquired, for example, by running many simulations.
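For reference, the standard definition being invoked here (my addition, not part of the original footnote) is, for discrete variables X and Y,

```latex
I(X;Y) = \sum_{x}\sum_{y} p(x,y)\,\log_2 \frac{p(x,y)}{p(x)\,p(y)}
```

which is zero exactly when the two variables are independent, i.e. when knowing one tells you nothing about the other.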
I don’t remember the image you show. I looked it up, and I don’t see this header on the Wayback Machine. I see a map atop this post in 2009, and then not too long after it becomes the grey texture that stayed until LW 2.0. Where did you get your image from?
I love the depth you’re going into with this sequence, and I am very keen to read more about this. I wonder if the word “knowledge” is not ideal. It seems like the examples you’ve given, while all clearly “knowledge”, could correspond to different things. Possibly the human-understandable concept of “knowledge” is tied up with lots of agent-y, optimizer-y things which make it more difficult to describe in a human-comfortable way on the level of physics (or maybe it’s totally possible and you’re going to prove me dead wrong in the next few posts!).
My other thought is that knowledge is stable to small perturbations (equivalently: small amounts of uncertainty) of the initial knowledge-accumulating region: a rock on the moon moved a couple of atoms to the left would no longer have the same mutual information with the history of humanity, but a ship moved a couple of atoms to the left would make the same map of the coastline.
This brings to mind the idea of abstractions as things which are not “wiped out” by noise or uncertainty between a system and an observer. Lots of examples I can think of as knowledge seem to be representations of abstractions but so do some counterexamples (it’s possible—minus quantumness—to have knowledge about the position of an atom at a certain time).
Other systems which are stable to small perturbations of the starting configuration are optimizers. I have written about optimizers previously from an information-theoretic point of view (though before realizing I only have a slippery grasp on the concept of knowledge). Is a knowledge-accumulating algorithm simply a special class of optimization algorithm? Backpropagation definitely seems to be both, so there’s probably significant overlap, but maybe there are some counterexamples I haven’t thought of yet.
Thank you for the kind words Jemist.
Yeah I’m open to improvements upon the use of the word “knowledge” because you’re right that what I’m describing here isn’t quite what either philosophers or cognitive scientists refer to as knowledge.
Yes, knowledge-accumulating systems do seem to be a special case of optimizing systems. It may be that among all optimizing systems, it is precisely the ones that accumulate knowledge in the process of optimization that are of most interest to us from an alignment perspective, because knowledge-accumulating optimizing systems are (perhaps) the most powerful of all optimizing systems.