Based on admittedly very little experience with scientific research, my basic feeling is that scientists don’t particularly care whether their results are affected by a coding error, just whether they get published. It’s not that they skip deep error checking only when the result matches their expectations; they’re unlikely to do it at all.
Though it’s possible that papers with unexpected results are held to higher standards by reviewers before they can get published. Which is another level of confirmation bias.
Ah, medium to strong disagree. I’m not far into my scientific career in $_DISCIPLINE, but any paper introducing a new “standard code” (i.e. one intended to be used more than once) has an extensive section explaining how the code has accurately reproduced analytic results or agreed with previous simulations in a simpler case (simpler than the one currently being analysed). Most codes also seem to be open-source, since it’s good for your cred if people are writing papers saying “Using x’s y code, we analyse...”, which means the code needs to be clearly written and commented. That’s not a guarantee against pernicious bugs, but it certainly helps. This error-checking setup is also convenient for the people generating analytic solutions, since they can find something pretty and say “Oh, people can use this to test their code.” (A toy version of that analytic-solution check is sketched below.)
Of course, this isn’t infallible, but sometimes you have to do 10 bad simulations before you can do 1 good one.
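To make that concrete, here is a minimal sketch of the kind of check such a verification section rests on, using a deliberately toy problem rather than anything from an actual paper: a forward-Euler integrator for dy/dt = −k·y is compared against the analytic solution y(t) = y₀·e^(−kt), and the error is confirmed to shrink at the expected first-order rate as the step size is halved. The function and parameter names here are made up for illustration.

```python
import numpy as np

def forward_euler_decay(y0, k, dt, t_end):
    """Integrate dy/dt = -k*y with forward Euler; return the value at t_end."""
    n_steps = int(round(t_end / dt))
    y = y0
    for _ in range(n_steps):
        y = y + dt * (-k * y)
    return y

def verify_against_analytic():
    y0, k, t_end = 1.0, 2.0, 1.0
    exact = y0 * np.exp(-k * t_end)      # analytic solution y(t) = y0 * exp(-k*t)
    errors = []
    for dt in (0.01, 0.005, 0.0025):
        approx = forward_euler_decay(y0, k, dt, t_end)
        errors.append(abs(approx - exact))
    # Forward Euler is first order: halving dt should roughly halve the error.
    ratios = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
    print("errors:", errors)
    print("error ratios (expect ~2):", ratios)

if __name__ == "__main__":
    verify_against_analytic()
```

Real verification sections do this with much harder problems (convergence under grid refinement, manufactured solutions, head-to-head runs against established codes), but the shape of the argument is the same: show that the code reproduces something you can compute independently.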
Fluid dynamics seems to be a much more serious field than the one I was doing an REU in. None of the standard papers I read even considered supplying code. Fortunately I have found a different field of study.
Also, you have persuaded me to include code in my senior thesis, which I admit I’ve also debugged in a manner similar to the one mentioned in the article: I kept fixing bugs until my polynomials stopped taking up a whole page of Mathematica output and started fitting onto one line. Usually a good sign.
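For what it’s worth, that eyeball test (“does the answer finally fit on one line?”) can be made slightly more systematic. Here is a small Python/SymPy sketch, purely illustrative and not anyone’s actual thesis code: compare the computed answer against a hand-derived closed form, treat “the difference simplifies to exactly zero” as the pass condition, and use an operation count as a rough stand-in for “fits on one line”.

```python
import sympy as sp

x, a, b = sp.symbols('x a b')

# Hand-derived "one line" answer: the discriminant of a*x**2 + b*x + 1 is b**2 - 4*a.
hand_derived = b**2 - 4*a

# The same quantity computed by the library (standing in for a longer derivation).
computed = sp.discriminant(a*x**2 + b*x + 1, x)

# Pass condition 1: the difference should simplify to exactly zero.
print(sp.simplify(computed - hand_derived))   # expect: 0

# Pass condition 2: the simplified answer should itself be compact --
# a crude proxy for "fits on one line instead of a page".
print(sp.count_ops(sp.simplify(computed)))    # expect: a small number
```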
Except for those damned lazy biologists, of course.