Just because X describes Y in a high-level abstract way doesn’t mean studying X is the best way of understanding Y.
Often, the best way is to simply study Y, and studying X just makes you sound smarter when talking about Y.
pointless abstractions: cybernetics and OODA loop
This is based on my experience trying to learn about cybernetics in order to understand GUI tool design for personal use, and to understand the feedback loop that roughly looks like: build -> use -> rethink -> let it help you rethink -> rebuild, where I and any LLM instance I talk to (via the GUI) are part of the cybernetic system. Whenever I “loaded cybernetics concepts” into my mind and tried to view GUI design from that perspective, I just spent a bunch of effort mapping the abstract ideas to concrete things, and then being like, “ok, but so what?”.
A similar thing happened while looking into the OODA loop, though at least its Wiki page has a nice little flowchart, and it’s much more concrete than cybernetics. And you can draw more concrete inspiration about GUI design by thinking about fighter pilot interfaces.
It’s also because I often see people using abstract reasoning, and whenever I dig into what they’re actually saying, it doesn’t make that much sense. Also because of personal experience: things become way clearer and easier to think about after phrasing them in very concrete and basic ways.
The abstract descriptions are sometimes leaky. Recently I had a short training on “how to solve problems”, which assumed discrete steps, something like “define the problem”, “collect facts”, “find out causes”, “propose a fix”.
Some people seem impressed by seeing four or five bullet points that in theory would allow them to solve any problem, if they only apply them in the right sequence. To me, it seems like the real world does not follow this “waterfall” model.
For example, collecting the facts. Which facts? About what? How much certainty do you need for those facts? If you really tried to collect all facts about a nontrivial system, it would take much more time than you usually have. So you go with some assumptions, like “these are the things that usually break, let’s check them first” and “these are the things that are cheap to check, so let’s check those, too”. And you solve half of the problems like this, quickly. For the other half, you take a step back, check a few more things, etc. And when things really get complicated, then you start going step by step, verifying one step at a time. But it would be a mistake to say that you wasted your time by trying to solve the problem before you had all the facts. You did something that works well on average; it’s just that the problem we had today was an outlier.
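The heuristic above can be sketched concretely. This is a minimal illustration, not anything from the training or the post itself: all names and numbers are made up. It phrases “check what usually breaks first, and what’s cheap to check first” as ordering diagnostic checks by failure probability per unit cost.

```python
# Hedged sketch of the "likely and cheap first" triage heuristic.
# The checks, probabilities, and costs below are purely illustrative.

def triage(checks):
    """Order checks by estimated failure probability per unit cost,
    so likely culprits and cheap checks are tried first."""
    return sorted(checks, key=lambda c: c["p_fail"] / c["cost"], reverse=True)

checks = [
    {"name": "config typo", "p_fail": 0.40, "cost": 1},   # usually breaks, cheap
    {"name": "disk full",   "p_fail": 0.10, "cost": 1},   # cheap to rule out
    {"name": "kernel bug",  "p_fail": 0.05, "cost": 20},  # rare and expensive
]

order = [c["name"] for c in triage(checks)]
# Most problems fall to the early, cheap checks; only the outliers
# require the slow, step-by-step verification described above.
```

The point of the sketch is that the ordering is adaptive and probabilistic, not a fixed “collect all facts, then diagnose” sequence.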
(I am not saying that this is completely useless. Sometimes people make the mistake of going in circles without taking a step back and checking their premises, so it is good to remind them to collect more data. But you need to be adaptive about that, which is something that in my case the high-level description abstracted away.)