The Virtue of Compartmentalization
Cross-posted from my blog, Selfish Meme.
Learning how to program is both learning the words to which computers listen and training yourself to think about complex problems. Learning to comfortably move between levels of abstraction is an important part of the second challenge.
Large programs are composed of multiple modules. Each module is composed of lines of code. Each line calls functions that manipulate objects, and each function is itself a deeper stack of instructions.
For a programmer to truly focus on one element of a program, he or she has to operate at the right level of abstraction and temporarily forget the elements above, below, or alongside the current problem.
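A minimal sketch of what that layering looks like in practice (the report example and function names here are hypothetical, chosen only for illustration): while writing the top-level function you think only in terms of totals and purchases; while writing the helpers you think only about arithmetic and comparisons; at no layer do you need to think about how numbers are stored in memory.

```python
# Hypothetical sketch: each layer exposes one idea and hides the layers beneath it.

def monthly_report(transactions):
    """Top level: the only question is 'what goes in the report?'"""
    return {
        "total": total_spent(transactions),
        "largest": largest_purchase(transactions),
    }

def total_spent(transactions):
    """Middle level: arithmetic over amounts; the report format is irrelevant here."""
    return sum(t["amount"] for t in transactions)

def largest_purchase(transactions):
    """Middle level: a comparison; how an 'amount' is represented in memory never comes up."""
    return max(transactions, key=lambda t: t["amount"])

print(monthly_report([
    {"item": "coffee", "amount": 4.50},
    {"item": "textbook", "amount": 120.00},
]))
```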
Programming is not the only discipline that requires this focus. Economists and mathematicians rely on tools such as regressions and Bayes’ rule without continually re-deriving the math that makes them true. Engineers do not consider wave-particle duality when solving Newtonian-scale problems. When a mechanic is fixing a radiator, the only relevant fact about spark plugs is that they produce heat.
If curiosity killed the cat, it’s only because it distracted her from more urgent matters.
As I became a better programmer I didn’t notice my compartmentalization skills improving; I was too lost in the problem at hand. I only noticed the skill when I saw its absence in other people. Take, for example, the confused philosophical debate about free will. A typical spiel from an actual philosopher can be found in the movie Waking Life.
Discussions about free will often veer into unproductive digressions about physical facts at the wrong level of abstraction. Perhaps, at its deepest level, reality is a collection of billiard balls. Perhaps reality is, deep down, a pantheon of gods rolling dice. Maybe all matter is composed of cellists balancing on vibrating tightropes. Maybe we’re living in a simulated matrix of 1s and 0s, or maybe it really is just turtles all the way down.
These are interesting questions that should be pursued by all blessed with sufficient curiosity, but they sit at a level of abstraction irrelevant to the question at hand.
A philosopher with a programmer’s discipline thinking about “free will” will not start by debating the above questions. Instead, he will notice that “free will” is itself a philosophical abstraction that can be broken down into several often-conflated components. Debating the concept as a whole means operating at too high a level of abstraction. When one asks “do I have free will?” one could actually be asking:
Are the actions of humans predictable?
Are humans perfectly predictable with complete knowledge and infinite computational time?
Will we ever have the complete knowledge and infinite computational time necessary to perfectly predict a human?
Can you reliably manipulate humans with advertising/priming?
Are humans capable of thinking about and changing their habits through conscious thought?
Do humans have a non-physical soul that directs their actions and stands above physical influences?
I’m sure there are other questions lurking in the conceptual quagmire of “free will,” but that’s a good start. These six are not only significantly narrower in scope than “Do humans have free will?” but are also answerable and actionable. Off the cuff:
Of course.
Probably.
Probably not.
Less than marketers/psychologists would want you to believe but more than the rest of us would like to admit.
More so than most animals, but less so than we might desire.
Brain damage and mind-altering drugs would suggest our “spirits” are not above physical influences.
So, in sum, what would a programmer have to say about the question of free will? Nothing. The original question is not clear enough to have a single answer; the problem must be broken into manageable pieces, and each piece examined in turn. Furthermore, he will ignore all claims about the fundamental nature of the universe. You don’t go digging around in machine code when you’re building a spreadsheet.
If you want your brain to think about problems larger, older and deeper than your brain, then you should be capable of zooming in and out of the problem – sometimes poring over the minutest details and sometimes blurring your vision to see the larger picture. Sometimes you need to alternate between multiple maps of varying detail for the same territory. Far from being a vice, this is the virtue of Compartmentalization.
Your homework assignment: Does the expression “love is just a chemical” change anything about Valentine’s Day?
When you mentioned compartmentalization, I thought of compartmentalization of beliefs and the failure to decompartmentalize—which I consider a rationalistic sin, not a virtue.
Maybe rename this to something about remembering the end goal, or something about abstraction levels, or keeping the potential application in mind; for example “the virtue of determinism”?
To contrast my intentions, the linked post is about compartmentalizing map-making from non-map-making while mine is compartmentalizing different maps. Your association is a good data point, so I’ll think about a better name. Perhaps the virtue of focus, abstraction or sequestration? Nothing’s jumping out at me right now.
Abstraction is good; encapsulation is also good. I mentally use the terms “denotative folding” and “conceptual collapse” to refer to the idea when programming or considering multi-dimensional physics, respectively.
I like the encapsulation suggestion a lot. I’ll implement all of these edits tonight. Thank you.
I think you have a pretty good idea in general here (I’d describe it as a tool, not a virtue, because it’s not universally applicable), but your language gets in the way.
First, “Compartmentalization,” while encapsulating the concept you refer to, also encapsulates a lot of negative connotation among rationalist types. “Abstraction” is already a pretty good all-purpose word.
Second, a lot of what you’re calling “compartmentalization” is already referred to as “reductionism”.
(Also, I do a lot of maintenance work for a product that is ten years old; the 3 C’s ceased to be meaningfully applicable in most of the code years ago. Considering only the level you’re actually working on is a luxury that sometimes can’t be afforded.)
You’re right. The elements of reductionism in the examples are unrelated to the topic. I’m attached to the examples, but I should either demarcate them as a separate skill or remove them.
I don’t think “compartmentalization” is an appropriate word for a virtue...
The error is in overestimating the importance of surface similarity. If I know something, I don’t necessarily have an argument that would convince you, and there is no “right to convince” that should change your belief despite the absence of argument. Similarly, there is no right to convince yourself. Just knowing something is not a very strong argument in favor of coming to believe it, or of changing the other beliefs about the same fact that assert contradictory things. To change a belief, you need to consider an argument (evidence) about it, not just an assertion of a contradictory belief, even if both happen to be present in the same mind.