I also frequently find myself in this situation. Maybe “shallow clarity”?
A bit related, “knowing where the ’sorry’s are” from this Buck post has stuck with me as a useful way of thinking about increasingly granular model-building.
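For anyone unfamiliar with the metaphor: in the Lean proof assistant, `sorry` is a placeholder that lets a proof type-check while marking a step as unjustified. A toy sketch (my own, not from Buck's post) of what "knowing where the sorrys are" looks like:

```lean
-- The helper claim is asserted but not yet proved: the `sorry`
-- marks exactly where the gap in the model is.
theorem helper (n : Nat) : n + 0 = n := by
  sorry  -- gap: believed but unjustified here

-- The high-level argument goes through *modulo* the sorry above,
-- so the overall structure is checkable even with a known gap.
theorem main_claim (n : Nat) : (n + 0) + 0 = n := by
  rw [helper, helper]
```

The appeal for model-building is that the gaps are localized and explicit, rather than smeared invisibly across the whole argument.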
Maybe a productive goal to have when I notice shallow clarity in myself is to look for the specific assumptions I'm making that the other person isn't, and either
a) try to grok the other person's more granular understanding if that's feasible, or
b) try to update the domain of validity of my simplified model / notice where its predictions break down, or
c) at least flag it as a simplification that's maybe missing something important.