I’ve long (30 years plus, and reinforced by working on dev tools, including an IDE for a large Java server company) been of the opinion that no one language is sufficient to solve the diverse set of fairly large and complex problems that modern developers face. You can get a long way with the Unix philosophy (small single-purpose tools that compose via a shell/script mechanism), but there are plenty of apps that need the efficiency or safety of a coherent design, or language structures to manage threads or networking or something else missing from either the low-level syscall-wrapper language or the high-level flow-control and data-structure-manipulation language of choice.
I’m a fan of the JVM abstraction for applications—it’s easy to mix languages (Java, Scala, Kotlin, Groovy, etc.), all with the same (imperfect but well-defined) underlying memory and threading model, and a reasonably complete core data-structure library. It runs on a bunch of different CPU architectures, mostly fairly efficiently (at least for long-running repeated operations, where JIT optimizations dominate).
It’s not great as an extremely low-level “systems” language—for that, a mix of assembler, C, C++, Rust, and others is better. And those aren’t great as mid-level “glue” languages (for one-off or higher-level OS or deployment automations), for which Python is the current winner (with a historical mix of Bash, Python, Ruby, and others).
Fundamentally, don’t think of a programming language as either the limiter or the enabler of your proficiency as a systems developer/engineer. Languages are just additional tools in the box for thinking about the efficiency, reliability, and explainability of systems that include computing machinery.
I apologize for being blunt, but your comments on debugging and documentation mostly indicate to me that you’ve never worked on large projects that span abstraction levels or use data structures complex enough that “debugging via stdout” is infeasible. Getting good at using a debugger for watchpoints and tracing is life-changing for many kinds of work. Likewise memory and execution profilers—not necessary for many pieces of code you’ll write, but life-saving for some.
Maybe that’s a problem: you’ve been at this for too long, so you’ve developed a very firm outlook. Just kidding.
You can get a long way with the Unix philosophy (small single-purpose tools that compose via a shell/script mechanism)
I do quite like sed, wget, etc., but they also don’t feel very rigid to use in a project. What I see is a bunch of projects that have a mess of scripts invoked by other scripts. See https://github.com/gcc-mirror/gcc. Why not link those tools into the code in a more rigid way? If a project uses git, why write a script that calls the system git instead of directly linking something like https://docs.rs/git2/latest/git2/ into your code to do what you want? I believe cargo/Rust is going in the right direction with build.rs, .cargo/config, Cargo.toml, etc.
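For instance, here’s a minimal sketch of the linked-library approach, assuming the git2 crate (a libgit2 binding) is declared in Cargo.toml: instead of spawning a `git` subprocess and parsing stdout and exit codes, you get typed values and real error results.

```rust
// Sketch: query a repository through the git2 crate rather than
// shelling out to the `git` binary. Assumes git2 is in Cargo.toml.
use git2::Repository;

fn main() -> Result<(), git2::Error> {
    // Find the repository enclosing the current directory,
    // roughly what `git rev-parse --show-toplevel` does in a script.
    let repo = Repository::discover(".")?;

    // Resolve HEAD to a commit -- the typed equivalent of
    // `git rev-parse HEAD` plus `git log -1 --format=%s`.
    let head = repo.head()?.peel_to_commit()?;
    println!("HEAD {}: {}", head.id(), head.summary().unwrap_or(""));
    Ok(())
}
```

A failure here is a `git2::Error` value you can match on, not an exit code and a stderr string you have to scrape.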
On the case of debugging, what I’m trying to say is that your language shouldn’t need to be debugged at all, given that it is safe. That means no logging to stdout or any other half-assed way of seeing where your code is dying. It should be formally verified, through some mathematical scheme. The (safe) features of the language should be used and composed in such a way that the code must always be correct. Another thing is to adequately test each individual piece of functionality of your code to ensure that it behaves in practice how you expect; property-based testing is also a good idea, imo. I just don’t think a project, no matter how big it gets, ever needs to reach the point where it randomly dies due to some small bug, potentially buggy code, or a half-assed solution that the programmer decided to overlook or implement anyway.
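To make the property-based-testing point concrete, here’s a hedged sketch using the proptest crate (assumed as a dev-dependency), with a toy encode/decode pair standing in for whatever is actually under test: you state an invariant and let the framework search for counterexamples instead of hand-picking cases.

```rust
// Sketch: a round-trip property with proptest (dev-dependency assumed).
// encode/decode are hypothetical stand-ins for the code under test.
use proptest::prelude::*;

fn encode(s: &str) -> Vec<u8> { s.as_bytes().to_vec() }
fn decode(b: &[u8]) -> Option<String> { String::from_utf8(b.to_vec()).ok() }

proptest! {
    #[test]
    fn encode_decode_roundtrip(s in ".*") {
        // For any generated string, decoding the encoding gives it back.
        prop_assert_eq!(decode(&encode(&s)), Some(s));
    }
}
```

When the property fails, proptest shrinks the input to a minimal counterexample, which is exactly the kind of “small bug” you want surfaced before production.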
So that leads me to say that I absolutely do think that a systems language definition is important, if not key, to the effective development and functioning of a program. You’ve also mentioned big projects that span multiple abstraction levels—that doesn’t sound like very good software design to me. From first principles, one would expect KISS, DRY, the principle of least knowledge, etc. to hold up. Why aren’t projects like that broken up into smaller pieces that are more manageable? Why aren’t they using better FFI/interface designs? A lot of the big “industry grade” projects seem to have decades of technical debt piled on, just trying to get things to work, wrapping band-aids around bad code instead of properly refactoring and redesigning. Newer protocols are supported in a lazy manner, leading to more technical debt.
Just as a good power tool like a circular saw would dramatically improve your DIY experience and the quality of the furniture you make over a handsaw, I think a good systems language + development environment would dramatically improve your experience in making good software.
Maybe that’s a problem: you’ve been at this for too long, so you’ve developed a very firm outlook.
Grin. There’s more than a little bit of truth there, thanks for reminding me.
I do quite like sed, wget, etc., but they also don’t feel very rigid to use in a project.
Depends on the project. If it’s stupid and it works, it’s not stupid. There’s a WHOLE LOT of projects that don’t need wide applicability, great uptime, or long-term maintainability. Getting them functional in half a day is way better than getting them reliable in a week (or in GCC’s case, keeping the build-of-gcc system functional over 35 years is important, but making it clean is less important than improving the actual usage and functionality of GCC). Rigidity is a cost, not a benefit. Sometimes.
Which is my main point—using different tools and languages for different needs is the only way I know to get to the right points in the tradeoff space (both human-level and operational) for different things. I suspect we’re talking past each other when we talk about multiple levels of abstraction in a project—the word “project” is pretty undefined, and separation between these abstractions is absolutely normal—whether they build/test/deploy together or separately is an independent question.
You’re absolutely right that pretty much all long-lived projects are a mess of hacks and debt, just trying to get things to work (and keep them working as new uses get added). But there are pretty difficult-to-oppose reasons that this is universally true. The decisions about when to refactor/rewrite vs. when to patch over are … non-trivial and heated. Fundamentally, tech debt is a lot like financial debt—a VERY useful tool for doing things sooner than you could fully afford on your own, and as long as you’re growing, it can be rolled over for a long, long time. And it makes you more fragile in a downturn, so the organizations and codebases that over-use it get winnowed occasionally. Circle of life, you know.