Maybe that's a problem: you've been at this for too long, so you've developed a very firm outlook. Just kidding.
You can get a long way with the Unix philosophy (small single-purpose tools that compose via a shell/script mechanism).
I do quite like sed, wget, etc., but they also don't feel very rigid to use in a project. What I see is a bunch of projects with a mess of scripts invoked by other scripts; see https://github.com/gcc-mirror/gcc. Why not link those tools into the code in a more rigid way? If a project uses git, why write a script that calls git from the system instead of directly linking something like https://docs.rs/git2/latest/git2/ into your code to do what you want? I believe cargo/Rust is going in the right direction with build.rs, Cargo.toml, etc.
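To make the contrast concrete, here's a minimal sketch (using std::fs rather than git2, purely so the example is dependency-free; the same argument applies to calling the git2 crate instead of spawning `git`): shelling out hands you untyped text and exit codes to parse, while linking a library gives you typed values the compiler can check.

```rust
use std::fs;
use std::process::Command;

fn main() {
    // Script-glue style: shell out to `ls` and get back untyped bytes.
    // Failures surface only as exit codes and stderr strings at runtime.
    match Command::new("ls").arg(".").output() {
        Ok(o) => println!("ls produced {} bytes of text to parse", o.stdout.len()),
        Err(e) => println!("ls not found or failed: {e}"),
    }

    // Linked-library style: call the API directly and get typed results.
    // Errors are values the type system forces you to handle.
    let entries: Vec<String> = fs::read_dir(".")
        .expect("read_dir failed")
        .filter_map(|e| e.ok())
        .map(|e| e.file_name().to_string_lossy().into_owned())
        .collect();
    println!("found {} entries via the library call", entries.len());
}
```

The library version also composes with the rest of the program's error handling, which is the "rigidity" the script version lacks.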
On the case of debugging, what I'm trying to say is that your language shouldn't need to be debugged at all, given that it is safe. That means no logging to stdout or any half-assed ways to see where your code is dying. It should be formally verified through some mathematical scheme. The (safe) features of the language should be used and composed in a way that the code must always be correct. Another thing is to adequately test each individual piece of functionality to ensure that it behaves in practice how you expect. Property-based testing is also a good idea imo. I just don't think a project, no matter how big it gets, ever needs to reach the point where it randomly dies due to some small bug, potentially buggy code, or a half-assed solution the programmer decided to overlook or implement anyway.
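As a sketch of what property-based testing looks like (hand-rolled here with a tiny linear congruential generator so it needs no dependencies; a real project would use a crate like proptest or quickcheck): instead of a handful of fixed cases, you assert an invariant over many randomly generated inputs.

```rust
// Hand-rolled property test: reversing a vector twice yields the original.
// A simple LCG stands in for a real PRNG to keep the sketch self-contained.
fn lcg(state: &mut u64) -> u64 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    *state
}

fn random_vec(state: &mut u64) -> Vec<u32> {
    let len = (lcg(state) % 16) as usize;
    (0..len).map(|_| (lcg(state) % 1000) as u32).collect()
}

fn main() {
    let mut state = 42u64;
    for _ in 0..1000 {
        let v = random_vec(&mut state);
        // The property under test: reverse is its own inverse.
        let mut w = v.clone();
        w.reverse();
        w.reverse();
        assert_eq!(v, w, "reverse twice should be the identity");
    }
    println!("1000 random cases passed");
}
```

Real property-testing frameworks add shrinking (minimizing a failing input), which is what makes the technique practical on larger codebases.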
So that leads me to say that I absolutely think a systems language's definition is important, if not key, to the effective development and functioning of a program. You've also mentioned big projects that span multiple abstraction levels; that doesn't sound like very good software design to me. From first principles, one would expect KISS, DRY, the principle of least knowledge, etc. to hold up. Why aren't projects like that broken up into smaller, more manageable pieces? Why aren't they using better FFI/interface designs? A lot of the big "industry grade" projects seem to have decades of technical debt piled on, just trying to get things to work and wrapping band-aids around bad code instead of properly refactoring and redesigning. Newer protocols are supported in a lazy manner, leading to more technical debt.
Just like a good power tool such as a circular saw would dramatically improve your DIY experience and the quality of the furniture you make compared to a handsaw, I think a good systems language + development environment would dramatically improve your experience in making good software.
Maybe that's a problem: you've been at this for too long, so you've developed a very firm outlook.
Grin. There’s more than a little bit of truth there, thanks for reminding me.
I do quite like sed, wget, etc., but they also don't feel very rigid to use in a project.
Depends on the project. If it’s stupid and it works, it’s not stupid. There’s a WHOLE LOT of projects that don’t need wide applicability, great uptime, nor long-term maintainability. Getting them functional in half a day is way better than getting them reliable in a week (or in GCC’s case, keeping the build-of-gcc system functional over 35 years is important, but making it clean is less important than improving the actual usage and functionality of GCC). Rigidity is a cost, not a benefit. Sometimes.
Which is my main point—using different tools and languages for different needs is the only way I know to get to the right points in the tradeoff space (both human-level and operational) for different things. I suspect we're talking past each other when we talk about multiple levels of abstraction in a project—the word "project" is pretty undefined, and separation between these abstractions is absolutely normal; whether they build/test/deploy together or separately is an independent question.
You're absolutely right that pretty much all long-lived projects are a mess of hacks and debt just trying to get things to work (and keep them working as new uses get added). But there are pretty difficult-to-oppose reasons that this is universally true. The decisions about when to refactor/rewrite vs. when to patch over are … non-trivial and heated. Fundamentally, tech debt is a lot like financial debt—a VERY useful tool for doing things sooner than you can fully afford on your own, and as long as you're growing, it can be rolled over for a long, long time. But it makes you more fragile in a downturn, so the organizations and codebases that over-use it get winnowed occasionally. Circle of life, you know.