The “map” and “territory” analogy as it pertains to potentially novel territories that people may not anticipate
In terms of the “map” and “territory” analogy, the goal of rationality is to make our map correspond more closely with the territory. This correspondence comes in two forms: (a) area and (b) accuracy. Person A could have a larger map than person B even if A’s map is less accurate than B’s. There are ways to increase the area your map covers—often by testing things at the boundary conditions of the territory. I like asking boundary-value/possibility-space questions such as “what might happen to the atmosphere of a rogue planet as time approaches infinity?”, since they might give us additional insight into the robustness of planetary-atmosphere models across different environments (and the possibility that I might be wrong motivates me to spend more effort testing and calibrating my model than I otherwise would). My intense curiosity about these highly theoretical questions often puzzles experts in the field, though, since the questions aren’t empirically verifiable and so are considered less “interesting”. I also like studying things that many academics aren’t comfortable studying (perhaps because it is harder to be empirically rigorous about them), such as the possible social outcomes of a radical social experiment. When you’re concerned with maintaining the accuracy of your map, it may come at the expense of dA/dt, where A is area (your map’s area grows more slowly with time).
I also feel that social breaching experiments are another interesting way of increasing the area of my “map”, since they help me test the robustness of my social models in situations that people are unaccustomed to. Hackers perform analogous experiments to test the robustness of security systems (in fact, a low level of potentially embarrassing hacking is probably optimal for keeping a security system robust—although even then, people may pay too much attention to certain models of hacking, prompting genuinely malicious hackers to dream up new models of attack).
With possibility space, you could encode the conditions of an environment as a k-dimensional binary vector such as (1,0,0,1,0,...), where 1 indicates that a given variable is present in that environment and 0 indicates that it is absent. We can then use Huffman coding, assigning codewords according to how frequently each combination of conditions occurs among the environments we encounter, so that less probable environments get longer codewords (i.e., carry more entropy/information).
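To make the Huffman-coding idea concrete, here is a minimal sketch in Python. The environment vectors and their frequencies are invented purely for illustration:

```python
# Encode each environment as a binary condition vector and build Huffman codes
# from how often each vector occurs. All frequencies are made up.
import heapq
from collections import Counter

# Hypothetical observations: each tuple marks which of k=3 conditions hold.
observed_environments = [
    (1, 0, 0), (1, 0, 0), (1, 0, 0), (1, 0, 0),  # common environment
    (0, 1, 0), (0, 1, 0),                        # less common
    (0, 0, 1),                                   # rare ("long tail") environment
]
freqs = Counter(observed_environments)

def huffman_codes(frequencies):
    """Return a dict mapping each symbol to its Huffman codeword."""
    # Each heap entry: (total weight, tiebreak, [(symbol, partial code), ...])
    heap = [(w, i, [(sym, "")]) for i, (sym, w) in enumerate(frequencies.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        merged = [(s, "0" + c) for s, c in left] + [(s, "1" + c) for s, c in right]
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return dict(heap[0][2])

codes = huffman_codes(freqs)
for env, code in sorted(codes.items(), key=lambda kv: len(kv[1])):
    print(env, "->", code)  # rarer environments tend to get longer codewords
```

Running this, the most common environment gets the shortest codeword and the rare one gets a longer codeword—the code length tracks how “surprising” the environment is.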
As we know from Taleb’s book “The Black Swan”, many people frequently underestimate the prevalence of “long tail” events (which tend to lie in the unrealized portion of possibility space and would have longer Huffman codes). This leads them to over-rely on Gaussian distributions even in situations where those distributions are inappropriate, and it is often said that this was one of the factors behind the recent financial crisis.
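As a toy illustration of that point (assuming SciPy is available; the distributions and threshold are chosen only for illustration), a heavy-tailed model assigns vastly more probability to an extreme event than a Gaussian does:

```python
# Compare how likely a "6-sigma" event is under a Gaussian model versus a
# heavy-tailed Student-t model. Numbers are illustrative, not empirical.
from scipy.stats import norm, t

x = 6.0  # a "6-sigma" event
p_gauss = norm.sf(x)      # P(X > 6) under a standard normal
p_heavy = t.sf(x, df=3)   # P(X > 6) under a fat-tailed Student-t (3 dof)

print(f"Gaussian tail probability: {p_gauss:.2e}")
print(f"Heavy-tailed (t, df=3):    {p_heavy:.2e}")
# The heavy-tailed model puts orders of magnitude more probability on the
# extreme event, which is roughly Taleb's complaint about Gaussian over-reliance.
```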
Now, what does this investigation of possibility space allow us to do? It lets us re-examine the robustness of our formal system—how well the system keeps doing its job in the face of perturbations to the environment we believe it applies to. We tend to overestimate the consistency of the environment. But if we consistently probe the boundary conditions, we may be able to better estimate the “map” corresponding to the “territory” of different (or potentially novel) environments that exist in possibility space, but not yet in the realized portion of it.
The trouble is that many people have a habitual tendency to avoid exploring boundary conditions. The space of realized events is always far smaller than the entirety of possibility space, and it is usually impractical to explore all of it. Since our time is limited and the payoffs of exploring the unrealized portions are uncertain (often time-delayed, and subject to hyperbolic time-discounting, especially when the payoffs may only arrive after a single person’s lifetime), people rarely explore those portions (although life extension, combined with creative ways of lowering people’s time preference, might change the incentives). Furthermore, we cannot empirically verify unrealized portions of possibility space using the traditional scientific method. Bayesian methods may be more appropriate, but even then people may plug the wrong values into Bayes’ formula (again, perhaps from over-assuming continuity in environmental conditions). As in my earlier example about hacking, it is all too easy for the designers of security systems to use the wrong priors when they are being observed by potential hackers who have some idea of how to exploit those priors.
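As a small sketch of that prior-sensitivity worry, here is Bayes’ rule applied to a hypothetical intrusion alarm; every number below is invented, the point is only that the same evidence yields very different posteriors under different priors:

```python
# The same evidence produces very different posteriors depending on the prior,
# so "plugging the wrong values into the Bayesian formula" can be costly.
def posterior(prior, likelihood_given_h, likelihood_given_not_h):
    """Bayes' rule: P(H | E) = P(E | H) P(H) / P(E)."""
    evidence = likelihood_given_h * prior + likelihood_given_not_h * (1 - prior)
    return likelihood_given_h * prior / evidence

# Evidence: an alarm that fires 90% of the time during a real intrusion,
# and 5% of the time otherwise (false-positive rate).
p_alarm_given_attack = 0.90
p_alarm_given_no_attack = 0.05

# Defender's complacent prior vs. a prior reflecting attackers adapting.
print(posterior(0.001, p_alarm_given_attack, p_alarm_given_no_attack))  # ~0.018
print(posterior(0.05,  p_alarm_given_attack, p_alarm_given_no_attack))  # ~0.49
```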
A nice discussion posting. Let me try to capture some of your points with a game-theory metaphor.
We are playing an iterated two-person game, and play has fallen into a repetitive pattern which may well be a Nash equilibrium—satisfactory to both sides. However, we don’t know for sure that current play is optimal, because much of the decision matrix remains unexplored—we simply don’t know what the payoffs are for some combinations of pure strategies, because those strategies have never been tried.
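To make the metaphor concrete, here is a toy sketch (payoffs entirely invented) in which some strategy pairs have never been tried, so the habitual play cannot be certified as an equilibrium:

```python
# A tiny "unexplored decision matrix". None marks strategy pairs never tried.
payoffs = {
    # (row_strategy, col_strategy): (row_payoff, col_payoff)
    ("habit", "habit"):     (3, 3),      # the repetitive pattern both sides know
    ("habit", "explore"):   (1, None),   # partially observed
    ("explore", "habit"):   (None, 1),
    ("explore", "explore"): (None, None),  # never tried
}

def is_certified_equilibrium(cell, payoffs):
    """True only if every unilateral deviation from `cell` has a *known*
    payoff that is no better for the deviating player."""
    r, c = cell
    base_r, base_c = payoffs[(r, c)]
    for alt_r in {"habit", "explore"} - {r}:
        dev = payoffs[(alt_r, c)][0]
        if dev is None or dev > base_r:
            return False
    for alt_c in {"habit", "explore"} - {c}:
        dev = payoffs[(r, alt_c)][1]
        if dev is None or dev > base_c:
            return False
    return True

print(is_certified_equilibrium(("habit", "habit"), payoffs))  # False: unknowns remain
```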
We may be inclined to do some exploring, but there are some difficulties.
Our coplayers tend to treat exploration moves on our part as defections and punish us accordingly. Efficient exploration seems to require collaboration between players, but our coplayers are less inclined to explore than we are.
Exploration usually costs us some payoff utility in the short term, even though it gains us some information about the game we are trapped in. So how do we justify that sacrifice? We obviously need to assign some instrumental value to the information gained. But how do we do that if we have no idea what we will find by exploring?
Clearly, our propensity to explore is greater the longer our time horizon (the lower our discount rate). This results in an inversion of the normal moral respectability of low discount rates: those who take the long view are more likely to indulge in explorations that short-termers consider immoral (because they impose some short-term disutility on others).
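A toy calculation of this point (all numbers invented): the same exploratory move—pay a cost now for a possibly higher payoff in every later round—looks worthwhile under a low discount rate and foolish under a high one.

```python
# Net present value of an exploration move under geometric discounting.
def value_of_exploring(cost_now, payoff_gain_per_round, discount, horizon=1000):
    """Pay `cost_now` today for `payoff_gain_per_round` in each future round."""
    future_value = sum(payoff_gain_per_round * discount**t for t in range(1, horizon))
    return future_value - cost_now

print(value_of_exploring(cost_now=5.0, payoff_gain_per_round=0.2, discount=0.99))  # > 0: explore
print(value_of_exploring(cost_now=5.0, payoff_gain_per_round=0.2, discount=0.80))  # < 0: don't
```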
It is an interesting set of issues to look at.
On maps, exploration and context:
The value of maps is that they are abstractions of territory. They represent the essential aspects of the domain while ignoring the inconsequential and avoiding the extraneous. Creating these is a form of compression, generating knowledge (meaning) from raw data.
The problem with maps is that they are (necessarily) incomplete. It isn’t enough to simply have an accurate map; the map must also be appropriate for the current context. For example, if I’m driving a car, a road map is more useful than a terrain map.
Exploration, then, is the process of mapping the maps: identifying maps and their appropriate contexts. This structure of abstracted abstractions is recursively defined, terminating (perhaps) in our physical reality. From this perspective there is very little difference between area and accuracy.
To tie this back to terminology more commonly used on this site, exploration is a process of learning how to carve reality at its joints.
It seems to get a bit thick around the “Huffman Coding” point—I’m not sure the end is clear.