John Maxwell posted this quote:

"The mystery is how a conception of the utility of outcomes that is vulnerable to such obvious counterexamples survived for so long. I can explain it only by a weakness of the scholarly mind that I have often observed in myself. I call it theory-induced blindness: once you have accepted a theory and used it as a tool in your thinking, it is extraordinarily difficult to notice its flaws. If you come upon an observation that does not seem to fit the model, you assume that there must be a perfectly good explanation that you are somehow missing. You give the theory the benefit of the doubt, trusting the community of experts who have accepted it."

-- Daniel Kahneman

Ontology lock-in. If you have nice stuff built on top of something, you'll demand proof commensurate with the value of those things when someone questions the base layer, even if the things built on top could be supported by alternative base layers. System 1 is cautious about this, which is reasonable. Our environment is much safer for experimentation than it used to be.
Great description. Yes, I think that's exactly why people are reluctant to see other people's points.