Sorry, but the software world described here has little to do with my daily work in software. As most apps have moved to webapps, most servers to the Cloud, and most devices to cloud-connected IoT, the paradigm for software has evolved toward maximizing change.
Software itself never was very reusable, but frameworks and APIs turned out to have huge value, so now we have systems everywhere based on a layered approach from OS up to application, where the application software is well abstracted from the OS, the hardware, and the supporting software (e.g. webserver or database). However, frameworks also change quickly these days: jQuery, Angular, React, Vue.js.
Cloud engineering is all about reliability, scalability, and a very rapid change process. This is accomplished through infrastructure automation and process automation. Well-organized shops aim to release daily while keeping very good quality, using CI/CD patterns that automate every step from build to deployment.
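To make the "automate every step" point concrete, here is a toy sketch of the pipeline idea in Python. Real shops use a CI system (Jenkins, GitHub Actions, GitLab CI, etc.) rather than a hand-rolled script, and the stage commands below are placeholders, but the principle is the same: stages run in order, and any failure blocks the release.

```python
import subprocess

# Illustrative stages only; in practice each command would be a real
# build, test, or deploy step defined in the CI system's config.
STAGES = [
    ("build", ["echo", "compiling"]),
    ("test", ["echo", "running test suite"]),
    ("deploy", ["echo", "pushing to staging"]),
]

def run_pipeline():
    """Run each stage in order; stop the release on the first failure."""
    for name, cmd in STAGES:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"stage '{name}' failed; aborting release")
            return False
        print(f"stage '{name}' ok")
    return True

ok = run_pipeline()
```

The point is not the script itself but the property it encodes: a human never decides whether to ship a broken build, because the gate is mechanical.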
Containers are everywhere, but the next step is Kubernetes and serverless in the Cloud, where we hardly touch the infrastructure and focus on code and APIs. I see no chance that code will last long enough to depreciate.
Making high-quality software is all about the process and the architecture. You just can’t meet today’s requirements building monoliths on manually managed servers.
Sounds like you’re mostly talking about ops, which is a different beast.
An example from my previous job, to illustrate the sort of things I’m talking about: we had a mortgage app, so we called a credit report API, an API to get house data from an address, and an API to pull current pricing from the mortgage securities market (there were others, but those three were the most important). Within a six-month span, the first two APIs made various small breaking changes to the format returned, and the third was shut down altogether and had to be switched to a new service.
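The standard defense against exactly this kind of churn is to keep each third-party API behind a thin adapter that maps the provider's wire format onto one internal type, so a format change touches only the adapter. A hedged sketch of that pattern in Python (all field names and formats here are hypothetical, not the actual APIs from the story):

```python
from dataclasses import dataclass

@dataclass
class CreditReport:
    """Internal representation; the rest of the app depends only on this."""
    score: int
    bureau: str

def _parse_v1(payload: dict) -> CreditReport:
    # hypothetical original format: {"fico": 720, "source": "equifax"}
    return CreditReport(score=payload["fico"], bureau=payload["source"])

def _parse_v2(payload: dict) -> CreditReport:
    # hypothetical post-breaking-change format:
    # {"score": {"value": 720}, "bureau": "EQ"}
    return CreditReport(score=payload["score"]["value"], bureau=payload["bureau"])

def parse_credit_response(payload: dict) -> CreditReport:
    # Dispatch on the payload's shape so both formats work during the
    # migration window; callers never see the provider's format at all.
    if "fico" in payload:
        return _parse_v1(payload)
    return _parse_v2(payload)
```

When the provider ships its next breaking change, the diff is one new parser and one dispatch branch, instead of a grep through every caller.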
(We also had the whole backend set up on Kubernetes, and breaking changes there were pretty rare. But as long as the infrastructure is working, it’s tangential to most engineers’ day-to-day work; the bulk of the code is not infrastructure/process related. Though I suppose we did have a slew of new bugs every time anything in the stack “upgraded” to a new version.)
Well, I’ve heard those bank APIs break a lot. What I’m trying to say is that software lifespan is not at all what it used to be 10–15 years ago. Software is just not a *thing* that gets depreciated; it’s a thing that never stops changing. This company here also separates infrastructure engineering from software, but that’s not how the big kids play, and I am learning some bitter lessons about why. It really is better if the developers are in charge of deployment, or at least constantly collaborating with the DevOps crew and the Ops crew. Granted, every project has its special requirements, so no idea works everywhere. But “throw it over the wall” is going away.
Maybe this is all just this year’s buzzwords, but I don’t think so. I am seeing some startups going after rust-belt manufacturing software, where shops are often still running on XP and dare not change anything. These startups want to sell support for a much more highly automated process, with much more flexibility. Good business model or not, you just can’t do that sort of thing with a waterfall release process.