Yes, “don’t pull a broken chain.” But how do you know or notice that it is broken? In all of the examples you cited, one thing was missing: the feedback loop, i.e. seeing incremental results of your actions and adjusting those actions accordingly. Open loop doesn’t work well, or at least is much harder to make work. Noticing that you are in open-loop mode and looking for ways to close the loop can help a lot but is often overlooked. I think in ML parlance it is called iterative learning or something. And if you notice that you are open-looping and cannot find any way to close the loop, adjust your expectation of success down accordingly.
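To make the open-loop vs. closed-loop distinction concrete, here is a minimal toy sketch (the numbers and function names are mine, purely illustrative, not from the discussion): an open-loop plan built from a wrong model keeps its error forever, while a closed loop observing the same system converges despite using the same wrong model.

```python
# Toy numbers: drive a room from 10 C toward 20 C with a heater whose
# true effect is 30% weaker than our model assumes.

TARGET = 20.0
ASSUMED_GAIN = 1.0  # degrees per unit of heating, according to our model
TRUE_GAIN = 0.7     # degrees per unit the heater actually delivers

def open_loop(temp, steps=10):
    # Plan once from the model, then never look at the territory again.
    plan = (TARGET - temp) / (ASSUMED_GAIN * steps)
    for _ in range(steps):
        temp += TRUE_GAIN * plan
    return temp

def closed_loop(temp, steps=10):
    # Re-plan from the observed state on every step.
    for _ in range(steps):
        action = (TARGET - temp) / ASSUMED_GAIN
        temp += TRUE_GAIN * action
    return temp

print(open_loop(10.0))    # ~17.0: the model error is baked in for good
print(closed_loop(10.0))  # ~20.0: feedback corrects the same wrong model
```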
Noticing that chain-1 is broken introduces a meta-level problem: there has to be a cause-and-effect chain (call it chain-2) from the break in chain-1 to my mental model of chain-1. One could even imagine chain-2 mapping itself, i.e. a cause-and-effect chain running from chain-2 to my mental model of chain-2; think of Yudkowsky’s lens that sees its own flaws.
I’m re-posting this now because my post on cartographic processes made too many inferential jumps all at once and caused confusion. But here’s where this is all headed: cartographic processes (i.e. cause-and-effect chains which produce maps/models of the world) which produce maps/models of themselves. Before we can deal with that much meta, we need to characterize more basic cartographic processes, i.e. cause-and-effect chains which produce accurate/useful maps of the territory.
Note that mapping is a type of abstraction that’s independent of causal chains and feedback/control loops. You can make an excellent thermostat that doesn’t understand a thing about thermodynamics, air flow, or control theory (though you may need to know something about these things in order to make the thermostat work well).
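Here is a minimal sketch of such a thermostat, assuming simple bang-bang control with hysteresis (the names, numbers, and toy room physics are illustrative, not anyone’s actual design): the controller is nothing but comparisons against a setpoint, with no map of thermodynamics or airflow anywhere inside it.

```python
def thermostat_step(heating, measured_temp, setpoint, hysteresis=0.5):
    """Bang-bang control: decide whether the heater runs this tick.

    No model of the room lives in here, just comparisons against a setpoint.
    The loop works because the feedback path closes through the room itself.
    """
    if measured_temp < setpoint - hysteresis:
        return True   # too cold: turn the heater on
    if measured_temp > setpoint + hysteresis:
        return False  # too warm: turn the heater off
    return heating    # inside the deadband: keep the current state

# Toy simulation: the room leaks heat toward 15 C; the heater adds heat.
temp, heating = 18.0, False
for _ in range(50):
    heating = thermostat_step(heating, temp, setpoint=20.0)
    temp += (0.8 if heating else 0.0) + 0.1 * (15.0 - temp)
```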
Seems like we speak different languages, or come from different epistemologies, since we seem to be talking past each other. The meta-model I was talking about is a rather universal one: close the loop whenever possible and don’t expect much from an open loop. I don’t understand the chain-1 to chain-2 argument. Then again, I find the whole idea of causal chains underwhelming. Maybe they have something to show in terms of solving problems that are intractable otherwise, but if so, I am not aware of any.
Here’s a real-world example (in fact, the example which motivated the original essay). I was at a startup, and there was a complicated feature we were working on for our app. I asked, “Is this actually going to make us any money? Like, what’s the cause-and-effect process from this feature to profits?” And nobody had any answer. There was no imagined chain from building that feature to making money; we had simply lost sight of the goal. We were building the feature because that was what we knew how to do, and making the app profitable was not something we knew how to do. We were looking for the metaphorical quarter under the streetlight.
Sure, feedback would have straightened us out eventually: we would have released the feature, and no money would have been made. But that would have taken months. Noticing that the chain was broken was much faster.
Lost purposes are a common occurrence, definitely. And it looks like you provided that feedback yourself instead of waiting in the open loop until your customers gave it to you. Your motivation was, or appeared to be, framed in terms of a broken causal chain, but the resulting action was something that ought to have been built in from the start: closing the loop early.