You are overestimating the ambition of the diagram. I know it does not add any new information. I am (working on) presenting the information in a visual form. That’s why I called it a new way of visualizing biases, not a new way to get rid of them with this one simple trick. You can convey all the information shown in a Venn diagram without a diagram, but that doesn’t mean the diagram has no possible value. And if there were a community dedicated to understanding logical relations between finite collections of sets back in 1880, I’m sure they would have shot down John’s big idea at first too.
Venn diagrams allow one to visually see and check the logical relations between finite collections of sets. For example, they make it easy to see that A − (U − A) = A (where U is the universe), or, to give a more complicated example, U − (A ∪ B) = (U − A) ∩ (U − B).
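Both identities are easy to sanity-check mechanically; a minimal sketch using Python sets on a toy universe (the particular sets are arbitrary):

```python
# Quick check of the two identities using Python sets.
# U is a small toy universe; A and B are arbitrary subsets.
U = set(range(10))
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

# A - (U - A) = A: removing A's complement from A changes nothing.
assert A - (U - A) == A

# De Morgan: U - (A | B) = (U - A) & (U - B)
assert U - (A | B) == (U - A) & (U - B)
print("both identities hold")
```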
Argument maps allow one to visually see the structure of an argument laid out, which:
helps avoid circular arguments;
ensures that we can see what’s supposed to be an unsupported assumption, which has to be agreed to for the argument to go through;
allows us to check whether any assumptions require further justification (i.e., cannot be justified by broad agreement or obviousness);
allows us to go through and check each inference step, which is more difficult in an argument written out as text, and very difficult when we are just talking to someone and trying to follow their argument, or thinking to ourselves about our own arguments.
In other words, both of these techniques help stop you when you try to do something wrong, because the structure of the visuals helps you see mistakes.
Your proposed diagrams don’t have this feature, because you have to stop yourself; i.e., you have to make a line crooked to indicate that the inference was biased.
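To make the "structure catches mistakes" point concrete: once an argument is mapped as claims plus support links, circularity becomes mechanically detectable. A minimal sketch, assuming a hypothetical dict-of-lists representation (the function and names are my own, not from any argument-mapping tool):

```python
# Hypothetical argument-map representation: each claim maps to the
# claims offered in support of it. A circular argument shows up as
# a cycle in this graph, which can be detected mechanically.
def find_cycle(supports, claim, seen=()):
    """Return True if `claim` is (transitively) used to support itself."""
    if claim in seen:
        return True
    return any(find_cycle(supports, s, seen + (claim,))
               for s in supports.get(claim, ()))

# "A because B, B because C, C because A" is circular:
circular = {"A": ["B"], "B": ["C"], "C": ["A"]}
assert find_cycle(circular, "A")

# "A because B and C" is fine:
linear = {"A": ["B", "C"]}
assert not find_cycle(linear, "A")
```

This is the sense in which the map itself constrains you: the mistake is visible (or checkable) in the structure, rather than requiring you to notice it yourself.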
It seems that people are focused a lot on the visualization as a tool for removing biases, rather than as a tool for mapping biases. Indeed, visualizations can have value as a summary tool rather than as a way to logically constrain thinking.
Some examples of such visualizations:
scatter plots to summarize a pattern
visualizations that use dots to convey the scale of a number intuitively (e.g., 1 person = 1 square)
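The second kind is easy to sketch. A toy, hypothetical rendering of "1 person = 1 square" (the function and its parameters are my own invention for illustration):

```python
# Toy sketch of the "1 person = 1 square" idea: render a count as
# rows of unit squares so its scale can be taken in at a glance.
def squares(count, per_row=10, mark="■"):
    """Return a multi-line string with one mark per unit counted."""
    rows = []
    for start in range(0, count, per_row):
        rows.append(mark * min(per_row, count - start))
    return "\n".join(rows)

print(squares(23))  # two full rows of 10, then a row of 3
```

Nothing here adds information beyond the number 23 itself; the value is purely in presenting it to a different sense, which is the same kind of value I see in the bias diagrams.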
In these kinds of visualizations, you get a different way to look at the problem which may appeal to a different sense. I can already see value in this as a way to summarize biased thought.
That said, I do agree with the comments about tuning the diagram to provide a bit more constraint. Going off of abramdemski’s comment above, I think coloring or otherwise distinguishing the lines by the type of reasoning involved would be useful. For instance, in your examples, you could attach an attribute like “future prediction” to the planning fallacy example, or something like “attribute inference” to the Bayesian inference example and maybe the undistributed example. By disambiguating between these types in your diagram, you can add rules about the input needed to correct a biased inference. A “future prediction” line without an “outside view” box would be highly suspect.
I think my comments about it being helpful in working through biases led people to think I intended these primarily as active problem-solving devices. Of course you can’t just draw a diagram with a jog in it and then say “Aha! That was a bias!” If anything, I think (particularly in more complex cases) the visuals could help make biases more tangible, almost as a kind of mnemonic device to internalize in the same way that you might create a diagram to help you study for a test. I would like to make the diagrams more robust to serve as a visual vocabulary for the types of ideas discussed on this site, and your comments on distinguishing types of biases visually are helpful and much appreciated. Would love to hear your thoughts on my latest post in response to this.