Maybe that’s the problem: you’ve been at this for too long, so you’ve developed a very firm outlook.
Grin. There’s more than a little bit of truth there, thanks for reminding me.
I do quite like sed, wget, etc., but they also don’t feel very rigid to use in a project.
Depends on the project. If it’s stupid and it works, it’s not stupid. There’s a WHOLE LOT of projects that don’t need wide applicability, great uptime, or long-term maintainability. Getting them functional in half a day is way better than getting them reliable in a week (or, in GCC’s case, keeping the build-of-gcc system functional over 35 years is important, but making it clean is less important than improving the actual usage and functionality of GCC). Rigidity is a cost, not a benefit. Sometimes.
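For concreteness, the kind of half-day, “stupid but it works” tool I mean looks something like this hypothetical Python sketch (not from any real project): fetch a page, scrape out the links with a crude regex, print them. No retries, no error handling, no tests, because the job doesn’t need them yet.

    # throwaway.py - hypothetical "stupid but it works" script:
    # grab one page and dump every href it contains.
    import re
    import urllib.request

    URL = "https://example.com"  # assumption: whatever page you happen to care about today

    html = urllib.request.urlopen(URL).read().decode("utf-8", errors="replace")

    # crude regex "parsing" of HTML -- exactly the shortcut that's fine for a
    # half-day tool and would be a liability in a long-lived system
    for link in re.findall(r'href="([^"]+)"', html):
        print(link)

The week-long version of the same thing would add retries, a real HTML parser, logging, and tests; whether that’s worth it depends entirely on how long the script has to live.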
Which is my main point—using different tools and languages for different needs is the only way I know to get to the right points in the tradeoff space (both human-level and operational) for different things. I suspect we’re talking past each other when we talk about multiple levels of abstraction in a project—the word “project” is pretty undefined, and separation between these abstractions is absolutely normal; whether they build/test/deploy together or separately is an independent question.
You’re absolutely right that pretty much all long-lived projects are a mess of hacks and debt just trying to get them to work (and keep them working as new uses get added). But there are pretty difficult-to-oppose reasons that this is universally true. The decisions about when to refactor/rewrite vs when to patch over are … non-trivial and heated. Fundamentally, tech debt is a lot like financial debt—a VERY useful tool for doing things sooner than you could otherwise afford, and as long as you’re growing, it can be rolled over for a long, long time. But it also makes you more fragile in a downturn, so the organizations and codebases that over-use it get winnowed occasionally. Circle of life, you know.