From my own research, in which I often form and report conclusions based on the output of some code, I know this is very true.
I’ve come to the conclusion that code (any code that I’ve written, anyway) is never bug-free, and the only material question is whether the bugs that are there affect anything.
What I’ve learned to do while writing the code is to run it on a variety of data for which I already know what the output should be. (For example, if a simple piece of code is supposed to output the maximum of a function, I input a zero function, then some kind of step function, then a sine function, etc.) I do this no matter how straightforward and simple the code seems to be; it only takes a few seconds, and most bugs are obvious in retrospect. I break the code into small pieces and retest different subsets of those pieces.
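To make this concrete, here’s a minimal sketch in Python (my choice of language; nothing above specifies one) of the kind of known-answer spot-checks I mean. `grid_max` is an illustrative stand-in for whatever routine is actually under test:

```python
import numpy as np

def grid_max(f, lo, hi, n=10001):
    """Illustrative stand-in: approximate the maximum of f on [lo, hi]
    by evaluating it on a dense grid."""
    xs = np.linspace(lo, hi, n)
    return np.max(f(xs))

# Spot-checks on inputs whose maxima are known in advance.
assert np.isclose(grid_max(lambda x: np.zeros_like(x), 0.0, 1.0), 0.0)            # zero function -> 0
assert np.isclose(grid_max(lambda x: np.where(x < 0.5, 0.0, 1.0), 0.0, 1.0), 1.0) # step function -> 1
assert np.isclose(grid_max(np.sin, 0.0, 2 * np.pi), 1.0, atol=1e-6)               # sine over one period -> 1
```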
When I finally use the code on something I’m going to report, I’ve developed a strategy of using lots of ad hoc methods to anticipate the proper output of the code even before I run it. If I get something unexpected, I go back to studying the code until I find a bug or need to update my ad hoc intuitions. While I’m searching for a bug, by following the steps of the code in more and more detail, I begin to understand in greater detail why and how the unexpected result came about: either it won’t make sense and I can better home in on the problem, or I find out why my ad hoc intuitions about the problem were wrong.
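Mechanically, that “anticipate, then compare” habit might look something like the sketch below (all names here are hypothetical, not from anything above):

```python
def run_analysis():
    """Hypothetical stand-in for the real computation whose output will be reported."""
    return 2.4e3

# Write down the ad hoc prediction *before* running the code,
# e.g. from a back-of-the-envelope estimate or a simplified model.
predicted = 2.5e3

result = run_analysis()
rel_err = abs(result - predicted) / abs(predicted)

# The tolerance is a judgment call; a large deviation means either the code
# has a bug or the intuition behind `predicted` needs updating.
if rel_err > 0.5:
    print(f"Unexpected: got {result:.3g}, anticipated ~{predicted:.3g} "
          f"({rel_err:.0%} off) -- time to study the code again.")
```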
I cannot imagine getting a reliable result without this interplay of testing my intuitions against the code and my code against my intuitions. Pretty much, they train each other, and I call that research.
Still, despite my efforts, I occasionally find a non-negligible bug when I look at the code later, especially when using it for a different application in a new context. This is a point of some embarrassment and anxiety for me, but I’m not sure what can be done about it. I expand the scope of my tests, and I would probably appear a bit obsessive-compulsive in my debugging process if anyone were looking over my shoulder.
I would never want to work on a project that I had no intuition for. My result would almost certainly be incorrect.
--
I will add that what saves us in this scenario is what Anna Salamon observed (Why will a randomly chosen eight-year-old fail a calculus test?) -- the correct solution is a tiny point in the space of possible answers. If your code is wrong, its output is unlikely to closely match your intuition, correct or incorrect, so you know you need to go back.
Your point in the post above is that you won’t catch the bug in your code if it outputs an answer that is close. How frequently can we expect this to occur?
If the bugs have the effect of scaling your answer by a small amount, then this is quite likely. Further, if there are a lot of such bugs, some increasing and some decreasing the final output, a person can systematically skew their results towards the one they expect by ignoring mistakes in the ‘wrong’ direction (“well, that error couldn’t be the problem”) and fixing ones in the ‘right’ direction. So if you find a lot of bugs in the direction you want to correct in, you should be sure to fix just as many in the opposite direction.
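To see how strong that skew can be, here’s a toy simulation under the assumed model that each bug rescales the output by a factor near 1, and the debugger only fixes the bugs pushing the answer away from what they expect:

```python
import random

random.seed(0)
true_answer = 100.0

# Assumed model: each bug multiplies the output by a factor near 1.
bugs = [random.uniform(0.9, 1.1) for _ in range(20)]

def output(bug_factors):
    """Result of the computation with the given bugs still present."""
    x = true_answer
    for b in bug_factors:
        x *= b
    return x

# Biased debugging: fix (remove) only the bugs that push the result *below*
# the expected answer, leaving the ones that inflate it in the 'right' direction.
remaining = [b for b in bugs if b > 1.0]

print(f"all bugs present:  {output(bugs):.1f}")
print(f"biased debugging:  {output(remaining):.1f}")  # systematically high
print(f"honest debugging:  {output([]):.1f}")         # all bugs fixed -> 100.0
```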