Wirth’s law is a computing adage made popular by Niklaus Wirth in 1995. It states that “software is getting slower more rapidly than hardware becomes faster.”
Is Wirth’s Law still in effect? Most of the examples I’ve read about are several years old.
ETA: I find it interesting that Wirth’s Law was apparently a thing for decades (known since the 1980s, supposedly) but seems to be over. I’m no expert though, I just wonder what changed.
It was my impression that Wirth’s law was mostly intended to be tongue-in-cheek, referring to how programs with user interfaces keep getting more bloated (which may be true, depending on your point of view).
In terms of software that actually needs speed (numerical simulations, science and tech software, games, etc.), the reverse has always been true: new algorithms are usually faster than old ones. A case in point is the trusty old BLAS library, the workhorse of scientific computing. Modern BLAS implementations are extremely well optimized, far more so than older implementations (for current computing hardware, of course).
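To give a feel for the gap, here is a minimal sketch (assuming a NumPy build linked against an optimized BLAS such as OpenBLAS or MKL) comparing a textbook triple-loop matrix multiply against NumPy’s matrix-multiply operator, which typically dispatches to the BLAS dgemm routine. The exact numbers depend on your machine and BLAS build; the point is just the order-of-magnitude gap.

```python
# Rough illustration, not a rigorous benchmark: naive O(n^3) Python loops
# versus a BLAS-backed multiply for the same pair of matrices.
import time

import numpy as np


def naive_matmul(a, b):
    """Textbook triple-loop matrix multiply, no blocking or vectorization."""
    n, k = a.shape
    k2, m = b.shape
    assert k == k2
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += a[i, p] * b[p, j]
            out[i, j] = s
    return out


n = 200  # kept small because the naive version is painfully slow
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c_naive = naive_matmul(a, b)
t1 = time.perf_counter()
c_blas = a @ b  # typically dispatches to the BLAS dgemm routine
t2 = time.perf_counter()

print(f"naive loops:      {t1 - t0:.3f} s")
print(f"BLAS (via NumPy): {t2 - t1:.6f} s")
print("results agree:", np.allclose(c_naive, c_blas))
```

On most machines the BLAS path wins by several orders of magnitude, and much of that gap comes from the implementation (cache blocking, SIMD, threading) rather than from the hardware alone.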
It wasn’t even true in 1995, I don’t think. The first way of evaluating it that comes to mind is the startup times of “equivalent” programs, like MS Windows, Macintosh OS, various Corels, etc.
Startup times for desktop operating systems seem to have trended up, then down, between the ’80s and today, with the worst performance coming in the late ’90s to 2000 or so, when rebooting any of the major systems could be a several-minute affair. Today, typical boot times for Mac, Windows, or GNU/Linux systems can be a handful of seconds if no boot-time repairs (that’s “fsck” to us Unix nerds) are required.
I know that a few years back, there was a big effort in the Linux space to improve startup times, in particular by switching from serial startup routines (with only one subsystem starting at once) to parallel ones where multiple independent subsystems could be starting at the same time. I expect the same was true on the other major systems as well.
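As a toy model of why that helps (this is not how systemd or any real init system is implemented, and it ignores dependencies between subsystems entirely), starting independent subsystems in parallel takes roughly as long as the slowest one, while starting them serially takes roughly the sum of all of them:

```python
# Toy model: each "subsystem" just sleeps for its (made-up) startup time.
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical subsystems and startup times in seconds (illustrative numbers).
SUBSYSTEMS = {"network": 1.0, "logging": 0.5, "display": 1.5, "sound": 0.5}


def start(name, seconds):
    time.sleep(seconds)  # stand-in for real initialization work
    return name


def boot_serial():
    t0 = time.perf_counter()
    for name, secs in SUBSYSTEMS.items():
        start(name, secs)
    return time.perf_counter() - t0


def boot_parallel():
    t0 = time.perf_counter()
    with ThreadPoolExecutor() as pool:
        list(pool.map(lambda item: start(*item), SUBSYSTEMS.items()))
    return time.perf_counter() - t0


print(f"serial boot:   {boot_serial():.2f} s  (about the sum of all startup times)")
print(f"parallel boot: {boot_parallel():.2f} s  (about the slowest single subsystem)")
```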
My experience is that boot time was worst in Windows Vista (released 2007) and improved a great deal in Windows 7 and 8. MS Office was probably at its most bloated in the 2007 edition as well.
It would be interesting to plot the time sequence of major chip upgrades from Intel on the same page as the time sequence of major upgrades of MS Word and/or MS Excel. My vague sense is that the early-to-mid ’90s had Word releases that I avoided for a year or two, until faster machines came along that made them more usable from my point of view. But it seems the rate of new Word releases has come way down compared to the rate of new chip releases. That is, perhaps hardware is creeping up faster than features are in the current epoch?
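For what it’s worth, a quick way to eyeball that comparison would be something like the sketch below. The release years are approximate and incomplete (listed from memory, purely for illustration), so anyone wanting to draw real conclusions should swap in a properly sourced list.

```python
# Sketch: put major Word releases and major Intel chip launches on one timeline.
# Years are approximate/illustrative; replace with sourced data before use.
import matplotlib.pyplot as plt

word_releases = {
    "Word 6.0": 1993, "Word 95": 1995, "Word 97": 1997, "Word 2000": 1999,
    "Word 2003": 2003, "Word 2007": 2007, "Word 2010": 2010, "Word 2013": 2013,
}
intel_chips = {
    "Pentium": 1993, "Pentium Pro": 1995, "Pentium II": 1997,
    "Pentium III": 1999, "Pentium 4": 2000, "Core 2": 2006,
    "Nehalem": 2008, "Sandy Bridge": 2011, "Haswell": 2013,
}

fig, ax = plt.subplots(figsize=(10, 3))
ax.eventplot(
    [sorted(word_releases.values()), sorted(intel_chips.values())],
    lineoffsets=[1, 2],
    linelengths=0.8,
)
ax.set_yticks([1, 2])
ax.set_yticklabels(["MS Word releases", "Intel chip launches"])
ax.set_xlabel("Year")
ax.set_title("Release cadence: Word vs. Intel chips (illustrative data only)")
plt.tight_layout()
plt.show()
```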
I find it interesting that Wirth’s Law was apparently a thing for decades (known since the 1980s, supposedly) but seems to be over. I’m no expert though, I just wonder what changed.
I think both software and hardware have gotten further out on the learning curve, which means their real rates of innovative development have both slowed down, which means the performance of software has sped up.
I don’t get how I get to the last part of that sentence from the first part either, but it almost makes sense.
I mean, this formulation is wrong (software isn’t getting slower), except for the tongue-in-cheek original interpretation, I guess. On the other hand, software is getting faster at a slower rate than hardware is, and that is still an important observation.
Wirth’s Law does seem to hold for video game consoles, though. Possibly because good graphics make better ads than short loading times.