I think there’s a question of understanding here, not just incentives. The knowledge of minds as cognitive engines, or the principle of the bottom line, is the knowledge that in full generality you can’t draw an accurate map of a city without seeing it or having some other kind of causal interaction with it. This is one of the things that readers have cited as the most important thing they learned from my writing on OB. And it’s the difference between being told an equation in school to use on a particular test, versus knowing under what (extremely general) real-world conditions you can derive it.
Like the difference between being told that gravitational acceleration is 9.8 m/s^2 and being able to use that to answer written questions about gravity on a test, or maybe even predict the fall of rocks off a tower, but never thinking to apply it to anything except gravity. Versus being able to do and visualize the two steps of integral calculus that get you from constant acceleration A to 1/2 A t^2, which is much more general than gravity.
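(Concretely, the two steps are just integrating the constant acceleration twice with respect to time, taking initial velocity and position to be zero:

```latex
v(t) = \int_0^t A \, dt' = A t,
\qquad
x(t) = \int_0^t A t' \, dt' = \tfrac{1}{2} A t^2 .
```

Nothing in the derivation mentions gravity; any constant acceleration works.)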
If you knew on a gut level—as knowledge—that you couldn’t draw a map of a city without looking at it, I think the issue of incentives would be a lot mooter. There might still be incentives whether or not to communicate that understanding, whether or not to talk to others about it, etc., but on a gut level, you yourself would just know.
Even if you “just know”, this doesn’t grant you the ability to perform an instantaneous search-and-replace on the entire contents of your own brain.
Think of the difference between copying code and invoking a function. If the function is defined in one place and then reused, you can make one change and get a multitude of benefits from doing so.
However, this relies on the original programmer having recognized the pattern, and then consistently using a single abstraction throughout the code. But in practice, we usually learn variations on a theme before we learn the theme itself, and don’t always connect all our variations.
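A minimal sketch of that contrast (the tax-rate example here is hypothetical, invented purely for illustration):

```python
# Copy-paste style: the 8% tax rate is duplicated at every "call site",
# so changing it means finding and editing each copy by hand.
price_a = 100 * 1.08
price_b = 250 * 1.08

# Invocation style: the rate lives behind one shared abstraction,
# so a single edit propagates to every caller automatically.
TAX_RATE = 1.08

def with_tax(price):
    """Apply the single, shared tax rate."""
    return price * TAX_RATE

print(with_tax(100))  # same value as price_a, but with one source of truth
```

The second style only works if the original programmer noticed the pattern in time to factor it out, which is exactly the point that follows.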
And this limitation applies equally to our declarative and procedural memories. If there’s not a shared abstraction in use, you have to search-and-replace… and the brain doesn’t have very many “indexes” you can use to do the searching with—you’re usually limited to searching by sensory information (which can include emotional responses, fortunately), or by existing abstractions. (“Off-index” or “table scan” searches are slower and unlikely to be complete, anyway—think of trying to do a search and replace on uses of the “visitor” pattern, where each application has different method names, none of which include “visit” or use “Visitor” in a class name!)
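To make the search-and-replace failure concrete, here is a hypothetical sketch (all class and method names are invented): three fragments each instantiate the same visitor-style traversal pattern, yet a textual search for the pattern's canonical name finds none of them.

```python
# Three hypothetical code fragments, each an instance of the same
# traversal pattern, but named after its own domain rather than the
# shared abstraction.
snippets = {
    "html_renderer": "class NodeRenderer:\n    def render_element(self, el): ...",
    "ast_walker": "class TreeWalker:\n    def walk_expr(self, expr): ...",
    "dir_scanner": "class DirScanner:\n    def scan_entry(self, entry): ...",
}

# Searching "by index" (by the abstraction's name) misses every instance:
hits = [name for name, src in snippets.items() if "visit" in src.lower()]
print(hits)  # [] -- the pattern is present everywhere, findable nowhere
```

The only way to find all three is a slow "table scan": reading every fragment and recognizing the structure by eye, which is just the position the brain is in when no shared abstraction was ever recorded.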
It seems to me that your and Robin’s view of minds still contains some notion of a “decider”—that there’s some part of you that can just look and see that something’s wrong, and then refuse to execute that wrongness.
But if mind is just a self-modifying program, then not only are we subject to getting things wrong, we’re also subject to recording that wrongness, and perpetuating it in a variety of ways… recapitulating the hardware wrongs on a software level, in other words.
And so, while you seem to be saying, “if people were better programmers, they’d write better code”… it seems to me you’re leaving out the part where becoming a better programmer has NO effect...
On all the code you’ve already written.