While the example given is not the main point of the article, I’d still like to share a bit of actual data, especially since I’m kind of annoyed at having spouted this rule as gospel before without having a source.
A study done at IBM shows a defect found during the coding stage costs about $25 to fix (basically the engineer hours used to find and fix it).
This cost quadruples to $100 during the build phase, presumably because a broken build can bottleneck a lot of other people trying to submit their code.
The cost quadruples again for bugs found during the QA/testing phase, to $450. I’m guessing this includes tester time, developer time, and additional tools used to facilitate bug tracking… investments the company might have made anyway, but ones that only pay off if testing catches bugs that would otherwise go out to market.
Bugs discovered after the product is released are the next milestone, and here the jump is huge: each bug costs $16k, about 35 times the cost of a tester-found bug. I’m not sure if this includes revenue lost due to bad publicity, but I’m guessing probably not; I think only tangible investments were tracked.
Critical bugs discovered by customers that do not result in a general recall cost about 10 times that much (this is the only step where the x10 multiplier actually holds), at $158k per defect. This increases to $241k for recalled products.
My own company also noticed that external bugs typically take twice as long to fix as internally found bugs (~59h to ~30h) in a certain division.
So this “rule of thumb” seems real enough… The x10 rule is not quite right; it’s more like a x4 rule with a huge jump once your product goes to market. But the general gist seems to be correct.
Note this is all more in line with the quoted graph than its extrapolation: Bugs detected late cost more to fix. It tells us nothing about the stage they were introduced in.
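The stage-to-stage multipliers can be checked with a quick sketch using the figures quoted above (the stage labels are my own shorthand, not from the study):

```python
# Defect-fix costs per stage, in USD, as quoted from the IBM study above.
costs = {
    "coding": 25,
    "build": 100,
    "qa_testing": 450,
    "released": 16_000,
    "critical_no_recall": 158_000,
    "recall": 241_000,
}

# Compute the cost multiplier between each consecutive pair of stages.
stages = list(costs)
for prev, curr in zip(stages, stages[1:]):
    multiplier = costs[curr] / costs[prev]
    print(f"{prev} -> {curr}: x{multiplier:.1f}")
```

Running this shows roughly x4 to x4.5 between the internal stages, a x35 jump at release, and x10 only for the customer-found-critical-bug step, which is what the “x4 rule with a huge jump at market” reading above is based on.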
Go data-driven conclusions! :)