The Virtue of Compartmentalization
Cross-posted from my blog, Selfish Meme.
I’d like to humbly propose a new virtue to add to Eliezer’s virtues of rationality: the virtue of Compartmentalization. Like the Aristotelian virtues, the virtue of Compartmentalization is a golden mean. Learning the appropriate amount of Compartmentalization, like learning the appropriate amount of bravery, is a lifelong challenge.

Learning how to program is both learning the words to which computers listen and training yourself to think about complex problems. Learning to comfortably move between levels of abstraction is an important part of the second challenge.
Large programs are composed of multiple modules. Each module is composed of lines of code. Each line of code is composed of functions manipulating objects. Each function is itself a still deeper set of instructions.
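To make the layering concrete, here is a minimal Python sketch (the function names and the "report" scenario are my own invention, purely illustrative). Each level is written in terms of the one below it, and you can reason about the top-level function without re-reading how the parsing works:

```python
# A toy illustration of abstraction layers: each function is built
# out of calls one level down, and each level can be reasoned about
# without holding the levels below it in your head.

def parse_record(line: str) -> dict:
    # Lowest layer shown here: raw text in, structured data out.
    name, value = line.split(",")
    return {"name": name.strip(), "value": float(value)}

def load_records(lines: list[str]) -> list[dict]:
    # Middle layer: thinks in records, not in commas and whitespace.
    return [parse_record(line) for line in lines]

def generate_report(lines: list[str]) -> str:
    # Top layer: thinks only in terms of "records" and "totals".
    records = load_records(lines)
    total = sum(r["value"] for r in records)
    return f"{len(records)} records, total = {total:.2f}"

print(generate_report(["widgets, 3.50", "gadgets, 7.25"]))
```

The point is not the code itself but that a bug in generate_report can be hunted without re-deriving how split and strip behave.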
For a programmer to truly focus on one element of a program, he or she has to operate at the right level of abstraction and temporarily forget the elements above, below or alongside the current problem.
Programming is not the only discipline that requires this focus. Economists and mathematicians rely on tools such as regressions and Bayes’ rule without continually re-deriving the math that makes them true. Engineers do not consider wave-particle duality when solving Newtonian-scale problems. When a mechanic is fixing a radiator, the only relevant fact about spark plugs is that they produce heat.
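For reference, the rule in question is just the one-line identity below (standard Bayes’ theorem, stated here only to underline that you can apply it without rehearsing its derivation every time):

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

where H is a hypothesis and E is the observed evidence.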
If curiosity killed the cat, it’s only because it distracted her from more urgent matters.
As I became a better programmer, I didn’t notice my Compartmentalization skills improving; I was too lost in the problem at hand. I only noticed the skill when I noticed its absence in other people. Take, for example, the confused philosophical debate about free will. A typical spiel from an actual philosopher can be found in the movie Waking Life.
Discussions about free will often veer into unproductive digressions about physical facts at the wrong level of abstraction. Perhaps, at its deepest level, reality is a collection of billiard balls. Perhaps reality is, deep down, a pantheon of gods rolling dice. Maybe all matter is composed of cellists balancing on vibrating tightropes. Maybe we’re living in a simulated matrix of 1s and 0s, or maybe it really is just turtles all the way down.
These are interesting questions that should be pursued by anyone blessed with sufficient curiosity, but they sit at a level of abstraction irrelevant to the question at hand.
A philosopher with a programmer’s discipline, thinking about “free will,” will not start by debating the above questions. Instead, he will notice that “free will” is itself a philosophical abstraction that can be broken down into several, often-conflated components. Debating the concept as a whole means working at too high a level of abstraction. When one asks “do I have free will?” one could actually be asking:
1. Are the actions of humans predictable?
2. Are humans perfectly predictable given complete knowledge and infinite computational time?
3. Will we ever have the complete knowledge and infinite computational time necessary to perfectly predict a human?
4. Can you reliably manipulate humans with advertising/priming?
5. Are humans capable of thinking about and changing their habits through conscious thought?
6. Do humans have a non-physical soul that directs our actions and is above physical influences?
I’m sure there are other questions lurking in the conceptual quagmire of “free will,” but that’s a good start. These six are not only significantly narrower in scope than “Do humans have free will?” but are also answerable and actionable. Off the cuff:
1. Of course.
2. Probably.
3. Probably not.
4. Less than marketers/psychologists would want you to believe, but more than the rest of us would like to admit.
5. More so than most animals, but less so than we might desire.
6. Brain damage and mind-altering drugs would suggest our “spirits” are not above physical influences.
So, in sum, what would a programmer have to say about the question of free will? Nothing. The problem must be broken into manageable pieces, and each element must be examined in turn; the original question is not clear enough to have a single answer. Furthermore, he will ignore all claims about the fundamental nature of the universe. You don’t go digging around in machine code when you’re making a spreadsheet.
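As a toy illustration of that move (entirely my own invention, with the sub-questions and off-the-cuff answers above hard-coded), the programmer’s approach might look like refusing the vague question and answering the narrower ones one at a time:

```python
# A toy sketch: refuse the vague question, then answer the narrower,
# answerable sub-questions it decomposes into. The sub-questions and
# rough answers are paraphrased from the list above.

SUB_QUESTIONS = {
    "Are the actions of humans predictable?": "Of course.",
    "Are humans perfectly predictable given complete knowledge "
    "and infinite computational time?": "Probably.",
    "Will we ever have that knowledge and that much compute?": "Probably not.",
    "Can you reliably manipulate humans with advertising/priming?":
        "Less than marketers claim, more than we'd like to admit.",
    "Can humans change their habits through conscious thought?":
        "More so than most animals, less so than we might desire.",
    "Is there a non-physical soul above physical influences?":
        "Brain damage and mind-altering drugs suggest not.",
}

def answer(question: str) -> str:
    if question == "Do humans have free will?":
        # Too high a level of abstraction to answer directly.
        raise ValueError("Too vague; ask one of the sub-questions instead.")
    return SUB_QUESTIONS.get(question, "Unknown sub-question.")

try:
    answer("Do humans have free will?")
except ValueError as err:
    print(err)

for question, verdict in SUB_QUESTIONS.items():
    print(f"{question} -> {verdict}")
```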
If you want your brain to think about problems larger, older and deeper than your brain, then you should be capable of zooming in and out of the problem – sometimes poring over the minutest details and sometimes blurring your vision to see the larger picture. Sometimes you need to alternate between multiple maps of varying detail for the same territory. Far from being a vice, this is the virtue of Compartmentalization.
Your homework assignment: Does the expression “love is just a chemical” change anything about Valentine’s Day?