It seems to me that a narrative is generally a maximum-likelihood explanation of an event. If you observe two weird events, an explanation that links them is more likely than an explanation that doesn’t. That’s why causality is such a great explanation mechanism. I don’t think making narratives is a bug. The bug is discarding the rest of the probability distribution… we are bad at remembering complex multimodal distributions.
Sometimes a narrative will even add unnecessary details, which looks like a paradox (the explanation would be more likely without the details). However, the explanation without the detail covers a whole region of possibilities, while the explanation with the detail is a single point. If what we remember are modes, it makes perfect sense to add the details.
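The zone-versus-point tension can be shown with a toy distribution (the explanations and probabilities below are invented for illustration): a vague hypothesis covering several outcomes always carries at least as much total mass as any one of its detailed sub-hypotheses, yet the detailed hypothesis can still be the single most probable outcome, i.e. the mode.

```python
# Invented posterior over explanations for a house fire.
probs = {
    "short-circuit": 0.30,   # a detailed, specific explanation
    "candle":        0.20,
    "arson":         0.15,
    "lightning":     0.10,
    "unknown-cause": 0.25,
}

# "Zone": the vaguer hypothesis "some electrical cause".
zone_mass = probs["short-circuit"] + probs["lightning"]   # 0.40
point_mass = probs["short-circuit"]                       # 0.30

# The zone strictly dominates any single point inside it...
print(zone_mass > point_mass)        # True

# ...yet the detailed point is still the best single guess (the mode).
mode = max(probs, key=probs.get)
print(mode)                          # short-circuit
```

So a rememberer who stores modes, not regions, is behaving sensibly when the stored story carries extra detail.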
Coming from this, I don’t really understand the advice to
“In other words, concentrate your probability mass”
It seems that concentrating the probability mass would reinforce the belief in the most likely explanation, which is often a narrative.
That portion could probably stand to be clarified—at the very least I should provide a link to what I’m referring to: http://yudkowsky.net/rational/technical
The point is to make your explanations capable of increasing your knowledge, rather than just satisfying your explanation-itch. If they can equally explain all outcomes, they aren’t really explanations.
To use Eliezer’s favorite example, phlogiston “feels” like an explanation for why things burn—but it doesn’t actually affect what you expect to see happen in the world.
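The “equally explains all outcomes” point can be made quantitative with a log score (the outcomes and probabilities below are invented for illustration): a theory that assigns equal probability to every binary outcome earns the same score no matter what happens, while a theory that concentrates mass on what actually occurs scores strictly higher.

```python
import math

# 1 = "it burned", 0 = "it didn't" (invented observations).
observations = [1, 1, 1, 0, 1, 1, 0, 1]

def log_score(p_of_one, outcomes):
    """Total log probability a theory assigns to the observed outcomes."""
    return sum(math.log(p_of_one if o == 1 else 1 - p_of_one)
               for o in outcomes)

# A vacuous theory says every outcome is equally likely: it "explains"
# everything and therefore predicts nothing.
vacuous = log_score(0.5, observations)

# A theory that concentrates mass on "burns" is rewarded when it's right.
predictive = log_score(0.75, observations)

print(predictive > vacuous)   # True
```

This is the sense in which concentrating probability mass is the advice: a prediction only pays off, under a proper scoring rule, to the extent it ruled alternatives out in advance.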
An explanation cannot increase your knowledge. Your knowledge can only increase by observation. Increasing your knowledge is a decision theory problem (the exploration/exploitation trade-off, for example).
Phlogiston explains why some categories of things burn and some don’t. It predicts that dry wood will always burn when heated to a certain temperature, and that different kinds of things burn reliably rather than sometimes burning and sometimes not. It also predicts that if you separate a piece of wood into smaller pieces, every smaller piece will still burn.
To clarify my original point, the problem isn’t the narrative. The narrative is a heuristic: it’s a method to update from an observation by remembering a simple unimodal distribution centered on the narrative (what I think most likely happened, and how confident I am).
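The cost of that heuristic can be made concrete (the posterior below is invented for illustration): when the true belief is bimodal, compressing it to a single mode plus a confidence silently throws away the mass around the second mode.

```python
# Invented bimodal posterior over "at what hour did the event happen?",
# with two rival stories peaking at 10 and at 15.
posterior = {9: 0.05, 10: 0.40, 11: 0.05, 14: 0.05, 15: 0.35, 16: 0.10}

# The narrative summary keeps only the mode and the mass near it.
mode = max(posterior, key=posterior.get)                       # 10
narrative_mass = sum(p for t, p in posterior.items()
                     if abs(t - mode) <= 1)                    # 0.50

# Everything outside the story is discarded wholesale, including the
# entire second mode around hour 15.
discarded = 1 - narrative_mass

print(mode)                   # 10
print(round(discarded, 2))    # 0.5
```

The summary (mode 10, moderate confidence) is a fine heuristic, but half the belief lived outside the story, which is exactly the “discarding the rest of the distribution” bug.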
Edited my reply to correct and clarify (though I’ll pass on debating the merits of phlogiston theory).
After re-reading your original comment (it took me a while to parse it) I generally agree with your points. In particular I think “The bug is discarding the rest of the probability distribution” is a good way of summarizing the problem, and something I’ll be mulling over.