The biggest factor here (at least in my experience) is external APIs. A company whose code does not call out to any external APIs has relatively light depreciation load—once their code is written, it’s mostly going to keep running, other than long-term changes in the language or OS. APIs usually change much more frequently than languages or operating systems, and are less stringent about backwards compatibility. (For apps built by large companies, this also includes calling APIs maintained by other teams.)
Of course, the trade-off is that your engineers then have to maintain the code themselves, handling security updates and refactors when they realize it's unmaintainable or built wrong, versus having the API provider do it.
One way to look at this is through the lens of Wardley Evolution. When a new function is not yet well understood, it needs to change very frequently, as people are still figuring out what the API needs to be and how to correctly abstract what you're doing. In this case, it makes sense to build the code yourself rather than use an API that knows less about your use case than you do. An example might be the first few blockchains writing their own consensus code instead of relying on Bitcoin's.
On the other extreme, when a paradigm is so well understood that it's commoditized, it makes sense to use an existing API whose maintainers will keep it patched against the occasional security vulnerability, instead of having your engineers do that work. An example would be web apps using existing SQL databases and the standard SQL interface instead of writing their own database format and query language.
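A minimal sketch of the commodity end of the spectrum, using Python's built-in sqlite3 module (the table and data here are invented for illustration):

```python
import sqlite3

# A web app storing users in SQLite via the standard SQL interface,
# instead of inventing its own file format and query language.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO users (name) VALUES (?)",
    [("alice",), ("bob",)],
)

# The commoditized query language does the work; the database's
# maintainers handle the storage format, indexing, and security patches.
names = [row[0] for row in conn.execute("SELECT name FROM users ORDER BY name")]
print(names)  # ['alice', 'bob']
```

The app's engineers never touch on-disk layout or query parsing; that depreciation load is carried by the database project.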
In the middle is where it gets murky. Adopt an API too early, and you risk spending more time keeping up with its changes than you would have spent writing your own thing. Adopt it too late, and you're spending valuable engineering time inventing the right abstractions and refactoring your code for maintainability, when it would have been cheaper to outsource that work to the API's developers.