The Simple Solow Model of Software Engineering
Optional background: The Super-Simple Solow Model
Software is economic capital—just like buildings, infrastructure, machines, etc. It’s created once, and then used for a (relatively) long time. Using it does not destroy it. Someone who buys/creates a machine usually plans to use it to build other things and make back their investment over time. Someone who buys/creates software usually plans to use it for other things and make back their investment over time.
Software depreciates. Hardware needs to be replaced (or cloud provider switched), operating systems need to be upgraded, and backward compatibility is not always maintained. Security problems pop up, and need to be patched. External libraries are deprecated, abandoned, and stop working altogether. People shift from desktop to browser to mobile to ???. Perhaps most frequently, external APIs change format or meaning or are shut down altogether.
In most macroeconomic models, new capital accumulates until it reaches an equilibrium level, where all investment goes toward repairing/replacing depreciated capital—resurfacing roads, replacing machines, repairing buildings rather than creating new roads, machines and buildings. The same applies to software teams/companies: code accumulates until it reaches an equilibrium level, where all effort goes toward repairing/replacing depreciated code—switching to new libraries, updating to match changed APIs, and patching bugs introduced by previous repairs.
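A minimal sketch of that dynamic, with entirely made-up numbers: treat the code base as a capital stock that grows with engineering output and loses a fixed fraction to depreciation each year, and watch it converge to the level where all output goes to upkeep.

```python
# Toy Solow-style model of a code base (all numbers are made up for illustration).
# Each year the team produces new_code_per_year lines of working code, while a
# fraction depreciation_rate of the existing code "breaks" (changed APIs,
# deprecated libraries, OS upgrades) and has to be rewritten before anything new ships.

new_code_per_year = 100_000   # lines the team can produce per year
depreciation_rate = 0.20      # fraction of existing code needing rework each year

code_base = 0.0
for year in range(1, 31):
    maintenance = depreciation_rate * code_base       # effort eaten by repairs
    code_base += new_code_per_year - maintenance      # the rest becomes new code
    if year in (1, 5, 10, 20, 30):
        print(f"year {year:2d}: {code_base:>9,.0f} lines, "
              f"{maintenance / new_code_per_year:.0%} of output spent on maintenance")

# Equilibrium: code_base -> new_code_per_year / depreciation_rate = 500,000 lines,
# at which point all engineering effort goes toward fighting depreciation.
```

With these made-up numbers the code base levels off around 500,000 lines; changing either parameter moves the equilibrium, but the qualitative story is the same: growth stalls once maintenance absorbs all of the team's output.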
What qualitative predictions does this model make?
Prediction 1
If a software company wants to expand the capabilities of their software over time, they can’t just write more code—the old software will break down if the engineers turn their attention elsewhere. That leaves a few options:
Hire more engineers (economics equivalent: population/labor force growth)
Hire/train better engineers (economics equivalent: more education)
Figure out better ways to make the software do what it does (economics equivalent: innovation)
Hiring more engineers is the “throw money at it” solution, and probably the most common in practice—but also the solution most prone to total failure when the VC funding dries up.
Hiring/training better engineers is the dream. Every software company wishes they could do so, many even claim to do so, but few (if any) actually manage it. There are many reasons: it’s hard to recognize skill levels above your own, education is slow and hard to measure, there’s lots of bullshit on the subject and it’s hard to comb through, it’s hard to get buy-in from management, etc.
Figuring out better ways to make the software do what it does is probably the most technically interesting item on the list, and also arguably the item with the most long-term potential. This includes adopting new libraries/languages/frameworks/techniques. It includes refactoring to unify duplicate functionality. It includes designing new abstraction layers. Unfortunately, all of these things are also easy to get wrong—unifying things with significantly divergent use cases, or designing a leaky abstraction—and it’s often hard to tell until later whether the change has helped or hurt.
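As a toy illustration of how “unifying duplicate functionality” can go wrong when the use cases genuinely diverge (the function and its parameters are hypothetical):

```python
import datetime

# Before: two small, independent helpers, each matched to its own use case.
def format_invoice_date(d):
    return d.strftime("%Y-%m-%d")        # invoices want ISO dates

def format_letter_date(d):
    return d.strftime("%B %d, %Y")       # letters want prose dates

# After an over-eager unification: one function plus a mode flag that every
# caller must now understand. Each new use case grows another branch or
# parameter (uppercase_month, the not-yet-implemented locale, ...), and a
# change made for one caller can silently break the others.
def format_date(d, style="invoice", uppercase_month=False, locale=None):
    if style == "invoice":
        return d.strftime("%Y-%m-%d")
    if style == "letter":
        s = d.strftime("%B %d, %Y")
        return s.upper() if uppercase_month else s
    raise ValueError(f"unknown style: {style!r}")

print(format_date(datetime.date(2024, 1, 5), style="letter"))  # January 05, 2024
```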
Prediction 2
New products from small companies tend to catch up to large existing products, at least in terms of features. The new product with a small code base needs to invest much less in fighting back depreciation (i.e. legacy code), so its team can add new features much more quickly.
If you’ve worked in a software startup, you’ve probably experienced this first hand.
Conversely, as the code base grows, the pace of new features necessarily slows. Decreasing marginal returns on new features meet a rising depreciation load, until adding a new feature means abandoning an old one. Unless a company is constantly adding engineers, the pace of feature addition will slow to a crawl as it grows.
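Under the same toy accounting as above (head counts, code sizes, and rates are hypothetical), the contrast in feature velocity is stark:

```python
# Net capacity for new features = total output minus the maintenance load
# imposed by the existing code base (illustrative numbers only).
def net_feature_capacity(engineers, lines_per_engineer_year, code_base, depreciation_rate):
    total_output = engineers * lines_per_engineer_year
    maintenance = depreciation_rate * code_base
    return total_output - maintenance

# A 5-person startup on 50k lines vs. a 100-person incumbent on 5M lines,
# both at 20%/year depreciation and 10k lines per engineer per year.
print(net_feature_capacity(5, 10_000, 50_000, 0.20))        # 40000.0 lines/year left for new features
print(net_feature_capacity(100, 10_000, 5_000_000, 0.20))   # 0.0 -- already at equilibrium
```

With these numbers the startup spends a fifth of its output on upkeep; the incumbent spends all of it.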
Prediction 3
Since this all depends on depreciation, it’s going to hit hardest when the software depreciates fastest.
The biggest factor here (at least in my experience) is external APIs. A company whose code does not call out to any external APIs has relatively light depreciation load—once their code is written, it’s mostly going to keep running, other than long-term changes in the language or OS. APIs usually change much more frequently than languages or operating systems, and are less stringent about backwards compatibility. (For apps built by large companies, this also includes calling APIs maintained by other teams.)
Redis is a pretty self-contained system—not much depreciation there. Redis could easily add a lot more features without drowning in maintenance needs. On the other end of the spectrum, a mortgage app needs to call loads of APIs—credit agencies, property databases, government APIs, pricing feeds… its developers will hit equilibrium pretty quickly. In that sort of environment, you’ll probably end up with a roughly constant number of APIs per engineer that can be sustained long term.
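A back-of-the-envelope version of that last claim (the maintenance cost per integration is a made-up number):

```python
# If each external API integration costs a fixed slice of an engineer-year
# annually just to keep up with upstream changes, then the number of
# integrations a team can sustain scales linearly with head count.
maintenance_per_api = 0.05    # hypothetical: 5% of an engineer-year per integration, per year

apis_per_engineer = 1 / maintenance_per_api
print(apis_per_engineer)      # 20.0 -- the ceiling at which all effort is API upkeep

engineers = 15
print(engineers * apis_per_engineer)  # 300.0 sustainable integrations for the whole team
```

Anything beyond that ceiling means dropping integrations, hiring, or accepting that some of them will quietly rot.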