“Inferential distance” is LW jargon. Does the bias have a standard name?
Illusion of transparency?
The illusion of transparency is thinking that the contents of MIND1 and MIND2 must be similar, ignoring that MIND2 does not have information that strongly influences how MIND1 thinks.
Expecting short inferential distances is underestimating the vertical complexity of a MAP, i.e., how much of its information requires knowledge of other information to understand.
EDIT: I don’t know if there is a standard name for this, and it would not surprise me if there isn’t. It seems to me that most biases are about how minds work and communicate, while “inferential distances” is about maps that did not exist in the ancestral environment.