When a model specifies a particular way of updating on future evidence, its current predictions being wrong doesn't by itself make the model wrong. Models learn; the way they learn is already part of them. An updating model is itself wrong when other available models are better in some harder-to-pin-down sense, not merely more accurate on particular predictions. Future evidence that falls entirely outside a model's scope does invalidate it. But not all models fail that way with respect to relevant future evidence, even when such evidence dramatically changes their predictions in retrospect.
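One way to make this concrete, as a sketch rather than anything the text commits to: treat a model as a predict rule plus an update rule, and compare models by cumulative log score over a whole sequence of observations (log score here is just one stand-in for the "harder-to-pin-down sense" of better; all names and parameters below are illustrative). The Beta-Bernoulli learner makes a poor first prediction yet isn't thereby wrong, because revising on each observation is part of its specification; the fixed model, which gives updating no role at all, is the one the surprising evidence counts against.

```python
import math

def log_score_sequence(predict, update, state, data):
    """Cumulative log score over a binary sequence: one hedged way to
    compare models as wholes, rather than on any single prediction."""
    total = 0.0
    for x in data:
        p = predict(state)                # P(x = 1) before seeing x
        total += math.log(p if x == 1 else 1.0 - p)
        state = update(state, x)          # updating is part of the model
    return total

# Model A: Beta-Bernoulli learner; its update rule is part of its spec.
predict_a = lambda s: s[0] / (s[0] + s[1])           # posterior mean
update_a  = lambda s, x: (s[0] + x, s[1] + 1 - x)    # conjugate update

# Model B: fixed claim p = 0.9, with no provision for updating.
predict_b = lambda s: 0.9
update_b  = lambda s, x: s

data = [0] * 8 + [1] * 2   # a run of zeros both models find surprising
print(log_score_sequence(predict_a, update_a, (1, 1), data))  # ~ -6.6
print(log_score_sequence(predict_b, update_b, None, data))    # ~ -18.6
```

Model A's opening prediction of 0.5 is badly off for this data, but that doesn't count against it, since adapting to the zeros is what it was always going to do. The comparison that could count against it is the sequence-level one, and the point survives swapping log score for other proper scoring rules.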