Part of my day job involves doing design reviews for alterations to relatively complex software systems, often systems that I don’t actually understand all that well to begin with. Mostly, what I catch are the integration failures: places where an assumption in one module doesn’t quite line up with the end-to-end system flow, or where an expected capability isn’t actually supported by the system design, that sort of thing.
Which isn’t quite the same thing as what you’re talking about, but it has some similarities: being able to push through from an understanding of each piece of the system to an understanding of that piece’s expected implications across the system as a whole.
But thinking about it now, I’m not really sure what that involves, certainly not at the 5-second level.
A few not-fully-formed thoughts: Don’t get distracted by details. Look at each piece, figure out its center, draw lines from center to center. Develop a schematic understanding before I try to understand the system in its entirety. If it’s too complicated to understand that way, back out and come up with a different way of decomposing the system into “pieces” that results in fewer pieces. Repeat until I have a big-picture skeleton view. Then go back to the beginning, and look at every detail, and for each detail explain how it connects to that skeleton. Not just understand, but explain: write it down, draw a picture, talk it through with someone. That includes initial requirements: explain what each requirement means in terms of that skeleton, and for each such requirement-thread find a matching design-thread.
Of course, someone’s going to ask for a concrete example, and I can’t think of how to be concrete without actually working through a design review in tedious detail, which I really don’t feel like doing. So I recognize that the above isn’t all that useful in and of itself, but maybe it’s a place to start.
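Maybe the closest I can get without a real review is a toy sketch of the bookkeeping I have in mind: pieces with one-line “centers”, center-to-center links, and a check that every requirement-thread has a matching design-thread. Everything here (the `Skeleton` class, the piece names, the requirements) is made up for illustration; it’s nothing like the scale of an actual design review.

```python
from dataclasses import dataclass, field

@dataclass
class Skeleton:
    """Big-picture view: pieces with one-line 'centers' plus center-to-center links."""
    centers: dict = field(default_factory=dict)   # piece name -> what that piece is about
    links: set = field(default_factory=set)       # (piece, piece) pairs: the design-threads
    coverage: dict = field(default_factory=dict)  # requirement -> design-threads that carry it

    def add_piece(self, name, center):
        self.centers[name] = center

    def link(self, a, b):
        if a not in self.centers or b not in self.centers:
            raise ValueError(f"unknown piece in link {a} -> {b}")
        self.links.add((a, b))

    def claim(self, requirement, a, b):
        """Record that a requirement-thread rides on the design-thread (a, b)."""
        if (a, b) not in self.links:
            raise ValueError(f"no design-thread {a} -> {b} to hang this requirement on")
        self.coverage.setdefault(requirement, []).append((a, b))

    def unmatched(self, requirements):
        """Requirement-threads with no matching design-thread: the gaps to chase in review."""
        return [r for r in requirements if r not in self.coverage]


# Made-up example: a tiny ordering service decomposed into three pieces.
s = Skeleton()
s.add_piece("api", "accepts requests, validates input")
s.add_piece("store", "persists orders durably")
s.add_piece("notifier", "tells customers what happened")
s.link("api", "store")
s.link("store", "notifier")
s.claim("orders survive a restart", "api", "store")

print(s.unmatched(["orders survive a restart", "customer gets a receipt"]))
# ['customer gets a receipt']  -- a requirement with no design-thread behind it yet
```

The point isn’t the data structure; it’s that the `unmatched` list is roughly the set of questions I’d walk into the review with.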
That’s an interesting metaphor.
I wonder if it doesn’t actually support compartmentalization. In software, you don’t want a lot of links between the internals of different modules, so by analogy it seems you might not want lots of links between the internals of different belief clusters. Just make sure they’re connected with a clean API.
Of course, that’s not really compartmentalization; that’s just not drawing any more arrows on your Bayes net than you need to. If your entire religious belief network really hangs on one empirical claim, that single link might be a sufficient connection between it and the rest of your beliefs, at least for efficiency.
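To make the “one arrow” picture concrete, here’s a minimal sketch with made-up numbers: a single empirical claim is the interface, and the rest of the cluster updates only through it.

```python
def posterior(prior, likelihood_true, likelihood_false):
    """P(claim | evidence) via Bayes' rule for a binary claim."""
    numerator = likelihood_true * prior
    return numerator / (numerator + likelihood_false * (1.0 - prior))

# The single interface between the clusters: one empirical claim.
p_claim = 0.5                 # prior on the empirical claim

# The rest of the cluster is conditioned only on that claim (the one arrow).
p_cluster_given_claim = 0.9   # if the claim holds, most of the cluster stands
p_cluster_given_not = 0.05    # if it doesn't, most of the cluster falls

# New evidence arrives about the empirical claim...
p_claim = posterior(p_claim, likelihood_true=0.2, likelihood_false=0.8)

# ...and the whole cluster updates through that single connection.
p_cluster = p_cluster_given_claim * p_claim + p_cluster_given_not * (1 - p_claim)
print(round(p_claim, 3), round(p_cluster, 3))   # 0.2 0.22
```

Only one number ever crosses the boundary between the two clusters, which is the whole point of the clean API.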
I agree that this approach likely works well for software precisely because software tends to be built modularly to begin with… it probably would work far less well for analyzing evolved rather than constructed systems.
I wouldn’t be so sure. Tooby and Cosmides keep finding evidence that our brains are organized, to some degree, in terms of adaptive modules like “cheater detection” and “status seeking”, which work spectacularly well in the domains for which they evolved and then fail miserably when extended to other domains. If we had a “modus tollens” module or a “Bayesian update” module, we’d be a lot smarter than we are; but apparently we don’t, because most people are atrociously bad at modus tollens and Bayesian updating, yet remarkably good at cheater detection and status seeking.
(If you haven’t seen them already, look up the Wason selection task experiments. They show that human beings fail miserably at modus tollens when the rule is abstract, but when the same rule is framed in terms of cheater detection, most people get it right.)
Yup, I’m familiar with the cheater stuff, and I agree that the brain has subsystems which work better in some domains than others.
The thing about a well-designed modular architecture, though, is not just that it has task-optimized subsystems, but that those subsystems are isolated from one another and communicate through interfaces that support treating them more or less independently. That’s what makes the kind of compartmentalization thomblake is talking about feasible.
If, instead, I have a bunch of subsystems that share each other’s code and data structures, I may still be able to identify “modules” that perform certain functions, but if I try to analyze (let alone optimize or upgrade) those modules independently I will quickly get bogged down in interactions that require me to understand the whole system.
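For what it’s worth, here’s the contrast I mean in toy form. The module names are borrowed from the cheater-detection example above, and everything about them is invented for illustration.

```python
class CheaterDetector:
    def __init__(self):
        self._ledger = {}                     # internal bookkeeping: defections per agent

    def record_defection(self, agent):
        self._ledger[agent] = self._ledger.get(agent, 0) + 1

    def is_cheater(self, agent):              # the narrow interface other modules should use
        return self._ledger.get(agent, 0) >= 2


class StatusTrackerModular:
    """Depends only on the interface; CheaterDetector can rewrite its internals freely."""
    def __init__(self, detector):
        self._detector = detector

    def status_of(self, agent):
        return "low" if self._detector.is_cheater(agent) else "high"


class StatusTrackerTangled:
    """Reads the detector's private ledger directly, so any change to that data
    structure now has to be analyzed against both 'modules' at once."""
    def __init__(self, detector):
        self._detector = detector

    def status_of(self, agent):
        return "low" if self._detector._ledger.get(agent, 0) >= 2 else "high"


d = CheaterDetector()
d.record_defection("bob")
d.record_defection("bob")
print(StatusTrackerModular(d).status_of("bob"), StatusTrackerTangled(d).status_of("bob"))  # low low
```

Both trackers give the same answer today, but only the modular one survives the detector changing how it keeps its ledger; the tangled one drags you back into understanding the whole system.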