The bucket diagrams don’t feel to me like the right diagrams to draw. I would be drawing causal diagrams (of aliefs); in the first example, something like “spelled oshun wrong → I can’t write → I can’t be a writer.” Once I notice that I feel like these arrows are there, I can ask myself whether they’re really there, how I could falsify that hypothesis, etc.
The causal chain feels like a post-hoc justification and not what actually goes on in the child’s brain. I expect this to be computed using a vaguer sense of similarity that often ends up agreeing with causal chains (at least well enough in domains with good feedback loops). I agree that causal chains are more useful models of how you should think explicitly about things, but it seems to me that the purpose of these diagrams is to give a memorable symbol for the bug described here (use case: recognizing and remembering when the technique applies).
I had a similar thought while reading this post, but I’m not sure invoking causality is necessary (having a direction still seems necessary). Just in terms of propositional logic, I would explain this post as follows:
1. Initially, one has the implication X⟹Y stored in one’s mind.
2. Someone asserts X.
3. Now one’s mind (perhaps subconsciously) does a modus ponens, and obtains Y.
4. However, Y is an undesirable belief, so one wants to deny it.
5. Instead of rejecting the implication X⟹Y, one adamantly denies X.
The “buckets error” is the implication X⟹Y, and “flinching away” is the denial of X. Flinching away is about protecting one’s epistemology because denying X is still better than accepting Y. Of course, it would be best to reject the implication X⟹Y, but since one can’t do this (by assumption, one makes the buckets error), it is preferable to “flinch away” from X.
ETA (2019-02-01): It occurred to me that this is basically the same thing as “one man’s modus ponens is another man’s modus tollens” (see e.g. this post) but with some extra emotional connotations.
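To make the structure concrete, here is a minimal sketch (mine, not from the post) that enumerates the truth assignments consistent with X⟹Y. If Y must stay false and the implication itself is never inspected, denying X is the only consistent move left — the modus-tollens “flinch”:

```python
# A minimal sketch: enumerate the (X, Y) assignments consistent with the
# stored implication X => Y. If Y is unacceptable and X => Y is never
# questioned, denying X is the only consistent option remaining.
from itertools import product

def implies(x: bool, y: bool) -> bool:
    return (not x) or y

# All assignments consistent with keeping the implication X => Y.
consistent = [(x, y) for x, y in product([False, True], repeat=2) if implies(x, y)]
print(consistent)  # [(False, False), (False, True), (True, True)]

# Insist that Y be False (Y is undesirable) while keeping X => Y:
remaining = [(x, y) for x, y in consistent if y is False]
print(remaining)   # [(False, False)] -- hence the flinch: "deny X"
```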
In my head, it feels mostly like a tree, e.g.:

- “I must have spelled oshun right”
  - Otherwise I can’t write well
    - If I can’t write well, I can’t be a writer
  - Only stupid people misspell common words
    - If I’m stupid, people won’t like me
  - etc.

For me, to unravel an irrational alief, I generally have to solve every node below it, e.g. by making sure that I get the benefit from some other alief.
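As a rough sketch (the representation and names are mine, purely illustrative), the tree above could be written down as a structure where unraveling the root means visiting every supporting node beneath it:

```python
# Hypothetical encoding of the alief tree above: each alief maps to the
# aliefs supporting it; "solving" the root requires addressing every
# descendant node.
alief_tree = {
    "I must have spelled oshun right": [
        "Otherwise I can't write well",
        "Only stupid people misspell common words",
    ],
    "Otherwise I can't write well": ["If I can't write well, I can't be a writer"],
    "Only stupid people misspell common words": ["If I'm stupid, people won't like me"],
    "If I can't write well, I can't be a writer": [],
    "If I'm stupid, people won't like me": [],
}

def nodes_to_solve(root: str) -> list[str]:
    """Collect every alief below the root, depth-first."""
    out = []
    for child in alief_tree[root]:
        out.append(child)
        out.extend(nodes_to_solve(child))
    return out

print(nodes_to_solve("I must have spelled oshun right"))
```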
I think they’re equivalent in a sense, but that bucket diagrams are still useful. A bucket can also occur when you conflate multiple causal nodes. So in the first example, the kid might not even have a conscious idea that there are three distinct causal nodes (“spelled oshun wrong”, “I can’t write”, “I can’t be a writer”), but instead treats them as a single node. If you’re able to catch the flinch, introspect, and notice that there are actually three nodes, you’re already a big part of the way there.
The bucket diagrams are too coarse, I think; they don’t keep track of what’s causing what and in what direction. That makes it harder to know what causal aliefs to inspect. And when you ask yourself questions like “what would be bad about knowing X?” you usually already get the answer in the form of a causal alief: “because then Y.” So the information’s already there; why not encode it in your diagram?
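A small illustrative sketch of the contrast (structure and wording mine): the bucket collapses the three claims into a single node, while the causal diagram keeps them distinct and records the direction of each alleged arrow, so every arrow can be questioned on its own:

```python
# Hypothetical encoding of the two diagram styles (not from the post).
# The bucket conflates three claims into one node:
bucket = {"spelled oshun wrong / I can't write / I can't be a writer"}

# The causal diagram keeps the nodes distinct and the arrows directed:
causal_edges = [
    ("spelled oshun wrong", "I can't write"),
    ("I can't write", "I can't be a writer"),
]

# With the arrows explicit, "what would be bad about knowing X?" points
# at a specific edge you can inspect and try to falsify:
for cause, effect in causal_edges:
    print(f"Is it really true that '{cause}' implies '{effect}'?")
```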
Fair point.
Agreed—this sort of “bucket error” can be generalized to “invisible uninspected background assumption”. But those don’t necessarily need to be biconditionals.
Does anyone know whether something like buckets/causal diagram nodes might have an analogue at the neural level?