I’d like to see evidence that the observed correlation between flawed thinking and biases is due to “flawed thinking is easily affected by biases” rather than “biases cause flawed thinking”.
If I understand right, the point was this: programmers routinely display flawed thinking (in the form of conceptual bugs) that doesn’t seem to stem from cognitive biases. This evidence, if you haven’t previously internalized it, should cause you to revise downward your estimate of the fraction of flawed thinking caused by biases.
Can you give an example?
One example I’ve seen first-hand, and suffered from, is confirmation bias. Beginning programmers, at least, when they run into a bug, do not try to disconfirm their guess about where the bug is, but instead try to confirm it. For example, they’ll put print statements for a troublesome variable after the line they suspect is causing the bug, rather than before it to verify that the value wasn’t bogus beforehand, or, better yet, both before and after the suspect line.
I don’t see how that is confirmation bias. Where does the beginning programmer discount or ignore disconfirming evidence? If the print statement shows the troublesome variable has the correct value following the suspect line, that is evidence against the suspect line being the bug.
The problem in this case is that the programmer is paying attention to only part of what it means for that line to be the bug. If it is the bug, it would transform correct state into incorrect state, and the programmer is only validating the resulting incorrect state, not the preceding correct state.
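To make that concrete, here is a minimal Python sketch of checking both sides of a suspect line; the names (apply_discount, total, coupon) are hypothetical stand-ins for whatever the real code does.

```python
# A minimal sketch of the "before and after" discipline; the names
# here are made up for illustration.

def apply_discount(total, coupon):
    # Stands in for whatever the suspect line actually does.
    return total - coupon["amount"]

total = 100.0
coupon = {"amount": 15.0}

print("before suspect line: total =", total)  # was the state correct going in?
total = apply_discount(total, coupon)         # the line suspected of the bug
print("after suspect line: total =", total)   # is it incorrect only coming out?

# A bad value only *after* the line implicates that line; a bad value
# already *before* it means the real bug is somewhere earlier.
```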
Though I will sometimes add a single debug statement, not to test some particular line of code, but to test whether the bug is before or after the debug statement, so I can look more closely at the offending section and narrow it down further.
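A minimal sketch of that single-probe bisection, with hypothetical pipeline stages standing in for real code:

```python
# One probe placed roughly in the middle of the suspect code; the
# stages below are hypothetical stand-ins.

def stage_a(x):
    return x * 2

def stage_b(x):
    return x + 1    # imagine the bug is in one of these stages

def stage_c(x):
    return x ** 2

x = stage_a(3)
x = stage_b(x)
# A correct value here means the bug is after this point; a wrong value
# means it is before. Either way, half the code is ruled out, and the
# probe can be moved and the process repeated.
print("midpoint probe: x =", x)
x = stage_c(x)
print("final result:", x)
```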
Yep. Actually, on the meta level, I see more confirmation bias right here. For instance, gwern doesn’t seek to disprove that it is confirmation bias, but seeks to confirm it: he sees how the print statement after the suspect line wouldn’t disconfirm the bug being on that line if the bug is above that line, but doesn’t see that it would disconfirm the bug if the bug were below the suspect line. (And in any case, if you want to conclusively show that there is a bug on the suspect line, you need to understand precisely what the bug is, which is confirming evidence, and which requires you to know the state after the line and see if it matches what you think the bug would produce.)
I do imagine that confirmation bias exists in programming whenever the programmer did not figure out or learn how to do it right, but learning that this bias exists and learning how to do it right are different things entirely. The programmer who has learned how to debug can keep track of the set of places where the bug could be, update it correctly, and seek effective ways to narrow it down.
Maybe it isn’t confirmation bias. But Wikipedia says that “[p]eople display [confirmation] bias when they gather or remember information selectively, or when they interpret it in a biased way”. If that’s a good description, then gwern’s example would fall under “gather[ing]”.
The bias the programmer in gwern’s example is exhibiting is the same one that makes people fail the Wason selection task—searching for confirming evidence rather than disconfirming evidence.
Edit: Retracting this because I think JGWeissman is right.
The programmer is indeed gathering evidence, but I don’t see how they are gathering it in a way that is meant to produce confirming rather than disconfirming evidence. As I have explained, the test could show a correct value for the troublesome variable, which would be evidence against the suspect line being the bug. The test is somewhat biased towards confirmation in that it is really testing whether the suspect line or any earlier line has a bug, but I don’t think this reflects the programmer seeking only confirming evidence so much as not understanding what they are testing.
That is not the cause of failure in the Wason selection task. The problem is a failure to use contrapositives, that is, to realize that “even implies red” is logically equivalent to its contrapositive “not red implies not even”, so that to test “even implies red” you have to check the cards that show an even number or a non-red color.
This is similar to the failure to use contrapositives behind the Positive Test Bias, which is itself similar to gwern’s example in that it involves failure to test every aspect of the hypothesis.
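A small Python sketch of that logic, with the classic card faces as hypothetical values: by the contrapositive, only the even number and the non-red color are worth turning over.

```python
# Which visible faces can falsify "even implies red"? By the
# contrapositive ("not red implies not even"), a falsifying card must
# show an even number on one side and a non-red color on the other.

def must_flip(visible_face):
    if isinstance(visible_face, int):
        return visible_face % 2 == 0   # even number: the back might not be red
    return visible_face != "red"       # non-red color: the back might be even

for face in [8, 3, "red", "brown"]:    # the classic four cards (values hypothetical)
    print(face, "->", "flip" if must_flip(face) else "skip")

# Prints: 8 -> flip, 3 -> skip, red -> skip, brown -> flip
```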
You’re right. I’m retracting the grandparent.
I haven’t seen programmers do this a whole lot. In any case, this should get cured by one bug-hunting session where the method horribly fails.
Also, I am not sure that ‘confirmation bias’ is the cause of anything here. The methodology of thought for determining where the bug is has to be taught. For me, confirming that the bug is where I think it is takes two print statements: one before and one after.
That’s evidence against the proposition that I was looking for evidence for.
ha ha.
Sure. A common source of bugs is that different developers or other stakeholders have different expectations. I say “this variable is height in feet”, and you assume it’s meters. I forget to synchronize access to a piece of state accessed from multiple threads. I get over-cautious and add too much synchronization, leading to deadlock. I forget to handle the case where a variable is null.
None of those feel like cognitive biases, yet all are common or even infamous bugs.
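For concreteness, a hypothetical Python sketch of the first of those bugs (feet vs. meters); the names and numbers are made up.

```python
# One party writes the check assuming feet...
FEET_PER_METER = 3.28084

def clearance_ok(height_ft, limit_ft=13.5):
    """Intended contract: height is given in feet."""
    return height_ft <= limit_ft

# ...another party calls it assuming meters.
bridge_height_m = 4.5                  # 4.5 m is about 14.8 ft, over the limit
print(clearance_ok(bridge_height_m))   # True -- silently wrong: 4.5 read as feet
print(clearance_ok(bridge_height_m * FEET_PER_METER))  # False -- the intended answer
```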