Both an explanation and a prediction seek to minimize the loss of information, but the information under concern differs between the two.
For an explanation, the goal is to make it as human-understandable as possible, which is to say, minimize the loss of information resulting from an expert human predicting relevant phenomena.
For a prediction, the goal is to make it as machine-understandable as possible, which is to say, minimize the loss of information resulting from a machine predicting relevant phenomena.
The reason there isn’t a crisp distinction between the two is that there isn’t a crisp distinction between a human and a machine. If humans had much larger working memories and more reliable calculation abilities, then explanations and predictions would look more similar: both could involve lots of detail. But since humans have limited memory and ability to calculate, explanations look more “narrative” than predictions (or, from the other perspective, predictions look more “technical” than explanations).
Note that before computers and automation, machine memory and calculation weren’t always better than their human equivalents, which would have elided the distinction between explanation and prediction in a way that could never happen today. For example, if all you have to work with is a compass and straightedge, then any geometric prediction is also going to look like an explanation, because we humans grok the compass and straightedge in a way we’ll never (without modifications, anyway) grok the more technical predictions modern geometry can make. The exceptions that prove the rule are very long geometric methods/proofs, which strain human memory and so feel more like predictions than methods/proofs that can be summarized in a picture.
As machines get more sophisticated, the distinction will grow larger, as we’ve already seen in debates about whether automated proofs with 10^8 steps are “really proofs”—this gets at the idea that if the steps are no longer grokable by humans, then it’s just a prediction and not an explanation, and we seem to want proofs to be both.
(Just an attempt at an answer)