That strikes me as a highly specific description of the “bug predicate”—I can see how it applies in this instance, but if you have 1000 bugs to classify, of which this is one, you’ll have to write 999 more predicates at this level. It seems to me, too, that we’ve only moved the question one step back—to why you deem an operation or a displayed result “invalid”. (The calculator applet on my computer lets me compute 1⁄0, giving back the result “ERROR”; but since that’s been the behavior over several OS versions, I suspect it’s not considered a “bug”.)
Is there a more abstract way of framing the predicate “this behavior is a bug”? (What is “bug” even a property of?)
Ah, I see—you’re looking for a general rule, not a specific reason.
In that case, the general rule under which this bug falls is the following:
For any valid input, the software should not produce an error message. For any invalid input, the software should unambiguously display a clear error message.
‘Valid input’ is defined as any input for which there is a sensible, correct output value.
So, for example, in a calculator application, 1⁄0 is not valid input because division by zero is undefined. Thus, “ERROR” (or some variant thereof) is a reasonable output. 1⁄0.2, on the other hand, is a valid operation, with a correct output value of 5. Returning “ERROR” in that case would be a bug.
Or, to put it another way: error messages should always have a clear external cause (up to and including hardware failure), and it should be obvious that the external cause is a case of using the software incorrectly. An error should never originate within the software; rather, the software should detect the error and (where possible) unambiguously communicate it to the user.
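To make that concrete, here is a minimal sketch in Python (the divide function and its error message are invented for illustration, not anyone’s actual calculator):

    def divide(a: float, b: float) -> float:
        """Divide a by b, treating division by zero as invalid input."""
        if b == 0:
            # Invalid input: no sensible, correct output value exists,
            # so the right response is a clear, unambiguous error.
            raise ValueError("cannot divide by zero: the result is undefined")
        # Valid input: produce the correct value, never an error.
        return a / b

    print(divide(1, 0.2))  # valid input: prints 5.0
    try:
        divide(1, 0)       # invalid input: triggers a clear error message
    except ValueError as err:
        print(f"ERROR: {err}")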
Granting that this definition of what constitutes a “bug” is diagnostic in the case we’ve been looking at (I’m not quite convinced, but let’s move on), will it suffice for the 999 other cases? Roughly how many general rules are we going to need to sort 1000 typical bugs?
Can we even tell, in the case we’ve been discussing, that the above definition applies, just by looking at the source code or revision history of the source code? Or do we need to have a conversation with the developers and possibly other stakeholders for every bug?
(I did warn up front that I consider the task of even asking the question properly to be very hard, so I’ll make no apologies for the decidedly Socratic turn of this thread.)
will it suffice for the 999 other cases? Roughly how many general rules are we going to need to sort 1000 typical bugs?
No. I have not yet addressed the issues of:
Incorrect output
Program crashes
Irrelevant output
Output that takes too long
Bad user interface
I can think, off the top of my head, of six rules that seem to cover most cases (each additional rule addressing one category in the list above). If I think about it for a few minutes longer, I may be able to think of exceptions (and then rules to cover those exceptions); however, I think it very probable that over 990 of those thousand bugs would fall under no more than a dozen similarly broad rules. I also expect the rare bug that is very hard to classify; such a bug is likely to turn up in a random sample of 1000 bugs.
Can we even tell, in the case we’ve been discussing, that the above definition applies, just by looking at the source code or revision history of the source code?
Hmmm. That depends. I can, because I know the program and the test case that triggered the bug. Any developer presented with the snippet of code should recognise its purpose, and see that it should be present, though it would not be obvious what valid input, if any, triggers the bug. Someone who is not a developer may need to get a developer to look at the code, then talk to that developer. In this specific case, talking with a stakeholder should not be necessary; an independent developer would be sufficient (there are bugs where talking to a stakeholder would be required to properly identify them as bugs). I don’t think that identifying this fix as a bugfix can be easily automated.
If I were to try to automate the task of identifying bugs, I’d search through the version history for the word “fix”. It’s not foolproof, but the presence of “fix” in the version history is strong evidence that something was, indeed, fixed. (This fails when the full comment includes a phrase like “...still need to fix...”.) Annoyingly, it would also fail to pick up this particular bug (the version history mentions “adding boundary checks” without once using the word “fix”).
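As a rough sketch, that search might look like this in Python, assuming a git repository (the function name and the filtering are mine, and the filter only sees subject lines, so it is deliberately crude):

    import subprocess

    def find_fix_commits(repo_path="."):
        # List commits whose message mentions "fix" (case-insensitive).
        # Heuristic only: it misses fixes described otherwise (e.g.
        # "adding boundary checks") and double-counts repeated fixes.
        out = subprocess.run(
            ["git", "log", "--format=%h %s", "-i", "--grep=fix"],
            cwd=repo_path, capture_output=True, text=True, check=True,
        ).stdout.splitlines()
        # Crudely drop the known false positive "...still need to fix...".
        return [c for c in out if "still need to fix" not in c.lower()]

    for commit in find_fix_commits():
        print(commit)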
I’d search through the version history for the word “fix”.
That’s a useful approximation for finding fixes, and enough simpler than a half-dozen rules that I would personally accept the residual uncertainty (e.g. repeated fixes for the same issue would be counted more than once). As you point out, you have to make it a systematic convention from the start of the project, which makes it perhaps less applicable to existing open-source projects. (Many developers diligently mark commits according to their nature, but I don’t know what proportion of all open-source devs do; I suspect not enough.)
It’s too bad we can’t do the same to find when bugs were introduced—developers don’t generally label as such commits that contain bugs.
It’s too bad we can’t do the same to find when bugs were introduced—developers don’t generally label as such commits that contain bugs.
If they did, it would make the bugs easier to find.
If I had to automate that, I’d consider the lines of code changed by the update. For each line changed, I’d find the last time that that line had been changed; I’d take the earliest of these dates.
However, many bugs are fixed not by lines changed, but by lines added. I’m not sure how to date those; the date of the creation of the function containing the new line? The date of the last change to that function? I can imagine situations where either of those could be valid. Again, I would take the earliest applicable date.
I should probably also ignore lines that are only comments.
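In code, that heuristic might look something like this rough sketch over a git repository (the function names are mine; pure additions are skipped and comment-only lines are not yet filtered out, per the caveats above):

    import re
    import subprocess

    def run_git(*args, repo="."):
        return subprocess.run(["git", *args], cwd=repo, capture_output=True,
                              text=True, check=True).stdout

    def estimate_introduction(fix_commit, path, repo="."):
        # For each line the fix changed in `path`, find when that line
        # was last touched (by blaming the parent of the fix), and return
        # the earliest such date as a Unix timestamp.
        dates = []
        diff = run_git("diff", "-U0", f"{fix_commit}^", fix_commit,
                       "--", path, repo=repo)
        # Hunk headers look like "@@ -12,3 +12,4 @@"; the "-12,3" part
        # gives the changed lines on the old (pre-fix) side of the diff.
        for start, count in re.findall(r"^@@ -(\d+)(?:,(\d+))? ", diff, re.M):
            n = int(count or "1")
            if n == 0:
                continue  # pure addition: no pre-fix lines to blame
            blame = run_git("blame", "--line-porcelain",
                            "-L", f"{start},+{n}",
                            f"{fix_commit}^", "--", path, repo=repo)
            dates += [int(line.split()[1]) for line in blame.splitlines()
                      if line.startswith("author-time ")]
        return min(dates) if dates else None

    # Hypothetical usage: estimate_introduction("abc1234", "src/calc.c")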
At least one well-known bug I know about consisted of commenting out a single line of code. This one is interesting—it remained undetected for two years, was very cheap to fix (just add the commented-out line back in), but had large and hard-to-estimate indirect costs.
Among people who buy into the “rising cost of defects” theory, there’s a common mistake: conflating “cost to fix” and “cost of the bug”. This is especially apparent in arguments that bugs in the field are “obviously” very costly to fix, because the software has been distributed in many places, etc. That strikes me as a category error.
many bugs are fixed not by lines changed, but by lines added
Many bugs are also fixed by adding or changing (or in fact deleting) code elsewhere than the place where the bug was introduced—the well-known game of workarounds.
I take your point. I should only ignore lines that are comments both before and after the change; commenting or uncommenting code can clearly be a bugfix. (Or can introduce a bug, of course.)
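The amended check might look like this naive sketch (assuming “#”-style line comments; block comments and “#” inside string literals would need real parsing):

    def is_comment_only(line: str) -> bool:
        # Naive: treats blank lines and "#..." lines as comment-only.
        stripped = line.strip()
        return stripped == "" or stripped.startswith("#")

    def can_ignore(old_line: str, new_line: str) -> bool:
        # Skip a changed line only if it is comment-only on BOTH sides;
        # commenting code out (or back in) is a real behavioural change.
        return is_comment_only(old_line) and is_comment_only(new_line)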
Among people who buy into the “rising cost of defects” theory, there’s a common mistake: conflating “cost to fix” and “cost of the bug”. This is especially apparent in arguments that bugs in the field are “obviously” very costly to fix, because the software has been distributed in many places, etc. That strikes me as a category error.
Hmmm. “Cost to fix”, to my mind, should include the cost to find the bug and the cost to repair the bug. “Cost of the bug” should include all the knock-on effects of the bug having been active in the field for some time (which could be lost productivity, financial losses, information leakage, and just about anything, depending on the bug).
Many bugs are also fixed by adding or changing (or in fact deleting) code elsewhere than the place where the bug was introduced—the well-known game of workarounds.
I would assert that this does not fix the bug at all; it simply makes the bug less relevant (hopefully, irrelevant to the end user). If I write a function that’s supposed to return a+b, and it instead returns a+b+1, then this can easily be worked around by subtracting one from the return value every time it is used; but the downside is that the function is still returning the wrong value (a trap for any future maintainers) and, moreover, it makes the actual bug even more expensive to fix (since once it is fixed, all the extraneous minus-ones must be tracked down and removed).
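As a toy illustration of that trap (hypothetical code, just to make the shape of the problem visible):

    def add(a, b):
        # Buggy: this was supposed to return a + b.
        return a + b + 1

    # The workaround spreads to every call site instead of fixing add():
    subtotal = add(2, 3) - 1    # one caller compensates...
    balance = add(10, 5) - 1    # ...and another, and another.
    print(subtotal, balance)    # 5 15

    # Once add() itself is finally repaired, every leftover "- 1" becomes
    # a fresh bug until it, too, is tracked down and removed.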