I suspect the point is that it’s not worthwhile to look for potential explanations for improbable events until they actually happen.
I think it’s more than that—he’s saying that if you have a plausible explanation for an event, the event itself is plausible, explanations being models of the world. It’s a warning against setting up excuses for why your model fails to predict the future in advance—you shouldn’t expect your model to fail, so when it does you don’t say, “Oh, here’s how this extremely surprising event fits my model anyway.” Instead, you say “damn, looks like I was wrong.”
I don’t, however, think it’s meant to be a warning against contrived thought experiments.