You assert this repeatedly. Is there evidence?
Are you asking for evidence that flawed thinking is easily affected by biases? What exactly is the alternative hypothesis (i.e. what do you want me to falsify): that it is not easily affected by biases?
Or are you asking for evidence “that it may superficially make biases look like what makes flawed thought flawed”? Is the alternative that needs to be falsified that it ‘may not’?
The standard model here is that biases are one of the major things that make flawed thought flawed. You suggested that model is false; that it is an illusion resulting from the causality running in the opposite direction.
I’d like to see evidence that the observed correlation between flawed thinking and biases is due to “flawed thinking is easily affected by biases” rather than “biases cause flawed thinking”.
If I understand right, the point was this: programmers routinely display flawed thinking (in the form of conceptual bugs) that doesn’t seem to stem from cognitive biases. This evidence, if you haven’t previously internalized it, should cause you to revise downward your estimate of the fraction of flawed thinking caused by biases.
Can you give an example?
One example I’ve seen first-hand and suffered from is confirmation bias. Beginning programmers, at least, when they run into a bug, do not try to disconfirm what they think the bug is, but instead try to confirm it. For example, they’ll put print statements for a troublesome variable after the line they suspect is causing the bug, rather than before it (to verify that it wasn’t already bogus beforehand) or, better yet, both before and after the suspect line.
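To make the contrast concrete, here is a minimal, self-contained sketch in Python (all function names and values are hypothetical, not taken from anyone’s actual code) of the two habits described above: printing only after the suspect line versus bracketing it.

    # Hypothetical example: the real bug is in parse_price (it truncates cents),
    # but the beginner suspects apply_discount.
    def parse_price(text):
        return int(float(text))          # bug: 19.99 becomes 19

    def apply_discount(price, fraction):
        return price * (1 - fraction)    # suspect line (actually correct)

    total = parse_price("19.99")

    # Beginner's instinct: print only *after* the suspect line.
    discounted = apply_discount(total, 0.10)
    print("after discount:", discounted)  # wrong value, which "confirms" the suspicion

    # More informative: also check the state *before* the suspect line.
    print("before discount:", total)      # already wrong, so the bug is earlier

Printing only the downstream value is consistent with either hypothesis; printing the upstream value is what can actually exonerate the suspect line.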
I don’t see how that is confirmation bias. Where does the beginning programmer discount or ignore disconfirming evidence? If the print statement shows the troublesome variable has the correct value following the suspect line, that is evidence against the suspect line being the bug.
The problem in this case is that the programmer is paying attention to only part of what it would mean for that line to be the bug. If it is a bug, it would transform correct state into incorrect state, and the programmer is only validating the resulting incorrect state, not the preceding correct state.
Though, I will sometimes add a single debug statement, not to test some particular line of code, but to test if the bug is before or after the debug statement, so I can look more closely at the offending section and narrow it down further.
Yep. Actually, on a meta level I see more confirmation bias right here. For instance, gwern doesn’t seek to disprove that it is confirmation bias, but seeks to confirm it, and sees how the print statement after the suspect line wouldn’t disconfirm the bug being on that line if the bug is above that line, but doesn’t see that it would disconfirm it if the bug was below the suspect line. (And in any case, if you want to conclusively show that there is a bug on the suspect line, you need to understand precisely what the bug is, which is confirming evidence, and which requires you to know the state afterwards and see if it matches what you think the bug would produce.)
I do imagine that confirmation bias does exist in programming whenever the programmer did not figure out or learn how to do it right, but learning that there’s this bias and learning how to do it right are different things entirely. The programmer who has learned how to debug can keep track of the set of places where the bug could be, update it correctly, and seek effective ways to narrow it down.
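A minimal sketch of that single-probe, narrowing-down idea (the pipeline stages and names are hypothetical): one well-placed print splits the set of places the bug could be roughly in half.

    # Hypothetical pipeline; the bug is somewhere between load() and render().
    def load(raw):
        return [x.strip() for x in raw.split(",")]

    def normalize(items):
        return [x.lower() for x in items]

    def dedupe(items):
        return list(set(items))          # candidate culprit: may scramble the order

    def render(items):
        return ", ".join(items)

    raw = "B, a, b, A"
    items = normalize(load(raw))
    print("probe:", items)               # correct here? the bug is downstream.
                                         # wrong here? it is in load/normalize.
    print(render(dedupe(items)))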
Maybe it isn’t confirmation bias. But Wikipedia says that “[p]eople display [confirmation] bias when they gather or remember information selectively, or when they interpret it in a biased way”. If that’s a good description, then gwern’s example would fall under “gather[ing]”.
The bias the programmer in gwern’s example is exhibiting is the same one that makes people fail the Wason selection task—searching for confirming evidence rather than disconfirming evidence.
Edit: Retracting this because I think JGWeissman is right.
The programmer is indeed gathering evidence, but I don’t see how they are gathering evidence in a way that is meant to produce confirming rather than disconfirming evidence. As I have explained, the test could show a correct value for the troublesome variable and be evidence against the suspect line being the bug. The test will be somewhat biased towards confirmation in that it is really testing if the suspect line has a bug or any earlier line has a bug, but I don’t think this bias reflects the programmer seeking only confirming evidence so much as not understanding what they are testing.
That is not the cause of failure in the Wason selection task. The problem is a failure to use contrapositives, that is, to realize that “even implies red” is logically equivalent to its contrapositive “not red implies not even”, so that to test “even implies red” you have to check the cards that show an even number or a non-red color.
This is similar to the failure to use contrapositives behind the Positive Test Bias, which is itself similar to gwern’s example in that it involves failure to test every aspect of the hypothesis.
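A small illustrative sketch of that point (the face values are made up): the only cards worth flipping are those that could reveal a counterexample, i.e. an even number or a non-red colour.

    # Rule under test: "if a card's number is even, its other side is red"
    # (even -> red), equivalently its contrapositive: not-red -> not-even.
    visible_faces = ["3", "8", "red", "brown"]

    def worth_flipping(face):
        if face.isdigit():
            return int(face) % 2 == 0    # even number: the back must be red
        return face != "red"             # non-red colour: the back must not be even

    print([f for f in visible_faces if worth_flipping(f)])   # ['8', 'brown']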
You’re right. I’m retracting the grandparent.
I haven’t seen programmers do this a whole lot. In any case, it should get cured by one bug-hunting session where the method fails horribly.
Also, I am not sure that this is ‘confirmation bias’ being the cause of anything. The methodology of thought for determining where the bug is has to be taught. For me to confirm that the bug is where I think it is takes two print statements: one before and one after.
That’s evidence against the proposition that I was looking for evidence for.
ha ha.
Sure. A common source of bugs is that different developers or other stakeholders have different expectations. I say “this variable is height in feet”, and you assume it’s meters. I forget to synchronize access to a piece of state accessed from multiple threads. I get over-cautious and add too much synchronization, leading to deadlock. I forget to handle the case where a variable is null.
None of those feels like a cognitive bias, yet all of them are common or even infamous bugs.
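As one concrete illustration of the first kind of bug in that list, here is a hypothetical, self-contained sketch (the names and numbers are made up) of two pieces of code silently disagreeing about units.

    # One author writes the function thinking in feet; the caller assumes metres.
    def ceiling_height():
        return 9.0                                  # intended as 9 feet

    def fits_in_room(object_height_m):
        return object_height_m <= ceiling_height()  # compares metres to feet

    print(fits_in_room(3.0))   # prints True, yet a 3 m object does not fit
                               # under a 9 ft (about 2.74 m) ceiling

Nothing here looks like motivated reasoning; it is simply two people carrying different assumptions about the same number.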
The mainstream model here in our technological civilization, which we have to sustain using our ape brains, is that correct methods of thought have to be taught, that biases tend to substitute for solutions when one does not know how to answer the question, and that they are displaced by more accurate methods.
The mainstream model helps decrease people’s mistake rate in programming, software engineering, and other disciplines.
As for the standard model “here” on LessWrong, I am not sure what it really is, and I do not want to risk making a strawman of it.
For example, if you need to build a bridge, and you need to decide on the thickness of the steel beams, you need to learn how to calculate that and how to check your calculations, and you need training so that you stop making mistakes such as mixing up the equations. A very experienced architect can guess-estimate the required thickness rather accurately (but won’t use that to build bridges).
Without that, if you want to guess-estimate the required thickness, you will be influenced by cognitive biases such as the framing effect, by the time of day, your mood, the colour of the steel beam, what you had for breakfast, and the like, through zillions of emotions and biases. You might go ahead and blame all those influences for the invalidity of your estimate, but the cause is incompetence.
The technological civilization you are living in, and all its accomplishments, are a demonstration of the success of the traditional approach.
I’m not familiar with this “mainstream model”. Is there a resource that could explain this in more detail?
Go look at how education works. Are engineers sitting in classes learning how the colour of the beam, the framing effect, or other fallacies can influence a guess-estimate of the required thickness, OR are they sitting in classes learning how to actually find the damn thickness?
So am I to infer that your answer to my question is “no”?
What I am saying is that you have enough facts at your disposal and need to process them. So the answer is ‘yes’. If you absolutely insist that I link a resource, one that I expect won’t add any new information to the information you already have but haven’t processed: http://en.wikipedia.org/wiki/Mathematics_education . It teaches how to do math, not ‘how the framing effect influences your calculations and how you must purify your mind of it’. (The same goes for any engineering course, take your pick. The same goes for teaching physicists or any other scientists.)