Model splintering happens when someone has updated on enough unusual sightings that it is worth their while to change their “language”.
I think of human mental model updates as being overwhelmingly “adding more things” rather than “editing existing things”. Like you see a funny video of a fish flopping around, and then a few days later you say “hey, look at the cat, she’s flopping around just like that fish video”. I’m not sure I’m disagreeing with you here, but your language kinda implies rare dramatic changes, like someone changing their religion and having an ontological crisis. That’s certainly an important case, but it’s much less common.
For real humans, I think this is a more gradual process—they learn and use some distinctions, and forget others, until their mental models are quite different a few years down the line.
The splintering can happen when a single feature splinters; it doesn’t have to be dramatic.