I think I see what you’re saying, but let me know if I’ve misinterpreted it.
Let’s look at the planning fallacy example. First, I would argue it is entirely possible to be aware of the existence of the planning fallacy and be aware that you are personally subject to it while not knowing exactly how to eliminate it. So you might draw up a diagram showing the bias visually before searching or brainstorming a debiasing method for it.
According to Daniel Kahneman, “Using… distributional information from other ventures similar to that being forecasted is called taking an ‘outside view’ and is the cure to the planning fallacy.”
So removing the planning fallacy is not a matter of simply compensating for the bias, but of adopting a new pathway to that type of conclusion. I don’t think overcompensating for a bias can be said to remove it on a systemic level, and I don’t think it necessarily needs to be shown differently in the diagram. If you can make taking the outside view your default method for setting deadlines, you still may not perfectly predict how long things will take, but the remaining error will no longer be due to the planning fallacy.
The point Czynski is making is that the diagram does not help us do that. Using the diagram, we mark an inference with a crooked line if we recognize that it is biased, and a straight line if we think it’s unbiased. So if we forget a given bias, the diagram does not help us remember to e.g. take the outside view.
Let’s say the diagram had three spots: evidence, prior, conclusion. And let’s say the diagram is a visual representation of Bayes’ Law. (I don’t know how to draw a diagram like that, but for the sake of argument, let’s pretend.) Then you would be forced to take the outside view in order to come up with a prior. So that kind of diagram would actually help you do the right thing instead of the wrong thing (at least for some biases).
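As a hypothetical worked example of that evidence–prior–conclusion structure, suppose we estimate P(finish on time) with Bayes’ Law, where the prior slot is exactly where the outside view gets forced in. All the probabilities below are made up for illustration.

```python
# Prior: base rate of similar projects finishing on time.
# Filling this slot is what forces you to take the outside view.
p_on_time = 0.3

# Likelihoods: how often the evidence ("the plan looks solid")
# is observed for on-time vs. late projects (invented numbers).
p_evidence_given_on_time = 0.8
p_evidence_given_late = 0.5

# Bayes' Law: P(H|E) = P(E|H) * P(H) / P(E)
p_evidence = (p_evidence_given_on_time * p_on_time
              + p_evidence_given_late * (1 - p_on_time))
posterior = p_evidence_given_on_time * p_on_time / p_evidence
print(f"P(on time | solid-looking plan) = {posterior:.2f}")
```

The structural point is that the calculation cannot even be started without a prior, so a diagram shaped like this would stop you before you could skip the outside view.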
You have not so much misinterpreted it as failed to understand it at all. Drawing the diagram visually does literally nothing to make the situation in any way clearer. It adds no information which you did not have beforehand. There does not appear to be anything about this diagram format that could be used to add new information even in principle. You have replaced “I think this will take X hours; however, planning fallacy.” with a drawing that depicts “I think this will take X hours; however, planning fallacy.” This is not helpful. It is almost a type error to think that this could be helpful.
You are overestimating the ambition of the diagram. I know it does not add any new information. I am (working on) presenting the information in a visual form. That’s why I called it a new way of visualizing biases, not a new way to get rid of them with this one simple trick. You can convey all the information shown in a Venn diagram without a diagram, but that doesn’t mean the diagram has no possible value. And if there were a community dedicated to understanding logical relations between finite collections of sets back in 1880, I’m sure they would have shot down John’s big idea at first too.
Venn diagrams allow one to visually see and check the logical relations between finite collections of sets. For example, they make it easy to see that A−(U−A)=A (where U is the universe); or, to give a more complicated example, that U−(A∪B)=(U−A)∩(U−B).
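Both identities are easy to check mechanically as well as visually. Here is a quick verification on an arbitrary concrete universe (the particular sets chosen are just for illustration):

```python
# Check the two Venn-diagram identities on a concrete example.
U = set(range(10))   # the universe
A = {1, 2, 3}
B = {3, 4, 5}

# A - (U - A) = A
assert A - (U - A) == A

# De Morgan: U - (A | B) = (U - A) & (U - B)
assert U - (A | B) == (U - A) & (U - B)
print("Both identities hold for this example.")
```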
Argument maps allow one to visually see the structure of an argument laid out, which:
helps avoid circular arguments;
ensures that we can see what’s supposed to be an unsupported assumption, which has to be agreed to for the argument to go through;
allows us to check whether any assumptions require further justification (i.e., cannot be justified by broad agreement or obviousness);
allows us to go through and check each inference step, in a way which is more difficult in an argument that’s written out in text, and very difficult if we’re just talking to someone and trying to get their argument—or when thinking through our own arguments.
In other words, both of these techniques help stop you when you try to do something wrong, because the structure of the visuals helps you see mistakes.
Your proposed diagrams don’t have this feature, because you have to stop yourself; i.e., you have to notice the bias first in order to make the line crooked.
It seems that people are focusing heavily on the visualization as a tool for removing biases, rather than as a tool for mapping them. But visualizations can have value as a summary tool rather than as a way to logically constrain thinking.
Some examples of such visualizations:
scatter plots to summarize a pattern
visualizations that use dots to convey the scale of a number intuitively (e.g., 1 person = 1 square)
In these kinds of visualizations, you get a different way to look at the problem which may appeal to a different sense. I can already see value in this as a way to summarize biased thought.
That said, I do agree with the comments about tuning the diagram to provide a bit more constraint. Building on abramdemski’s comment above, I think coloring or otherwise varying the lines by the type of reasoning involved would be useful. For instance, in your examples, you could attach the attribute “future prediction” to the planning fallacy example, and something like “attribute inference” to the Bayesian inference example and perhaps the undistributed example. By disambiguating between these types in your diagram, you can add rules about the input necessary to correct a biased inference: a “future prediction” line without an “outside view” box would be highly suspect.
I think my comments about it being helpful in working through biases led people to think I intended these primarily as active problem-solving devices. Of course you can’t just draw a diagram with a jog in it and then say “Aha! That was a bias!” If anything, I think (particularly in more complex cases) the visuals could help make biases more tangible, almost as a kind of mnemonic device to internalize in the same way that you might create a diagram to help you study for a test. I would like to make the diagrams more robust to serve as a visual vocabulary for the types of ideas discussed on this site, and your comments on distinguishing types of biases visually are helpful and much appreciated. Would love to hear your thoughts on my latest post in response to this.