Is there a procedure in Bayesian inference to determine how much new information in the future invalidates your model?
Say I have some kind of time-series data, and I make an inference from it up to the current time. If the data is costly to get in the future, would I have a way of determining when the cost of increasing error exceeds the cost of getting the new data and updating my inference?
Generally speaking, for this you need a meta-model, that is, a model of how your model will change (e.g. become outdated) with the arrival of new information. Plus, if you want to compare costs, you need a loss function that tells you how costly your model's errors are.
Unfortunately, to pull this off you need to look closely at both your model and the model of the error; there's no general method AFAIK.
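As a toy illustration of what such a meta-model plus loss function might look like: suppose your inference target follows a Gaussian random walk, so without new observations the posterior predictive variance grows linearly in time. With a quadratic loss, the expected cost of error at step k is proportional to that variance, and you can solve for the first step at which it exceeds the price of a new data point. All the names, costs, and variances below are made-up assumptions for this one specific setup, not a general procedure:

```python
def steps_until_update(post_var, q, c_err, c_data):
    """Return the number of future steps after which the expected
    cost of prediction error first exceeds the cost of new data.

    Meta-model (assumed): without new data, predictive variance
    grows linearly, var_k = post_var + k * q, where q is the
    random-walk drift variance per step.
    Loss (assumed): expected squared-error cost at step k is
    c_err * var_k; a new observation costs c_data.
    """
    k = 0
    while c_err * (post_var + k * q) <= c_data:
        k += 1
    return k

# Illustrative numbers: posterior variance 0.5 now, drift 0.2 per
# step, unit cost 1.0 per unit of expected squared error, and a
# new data point costing 2.0.
k = steps_until_update(post_var=0.5, q=0.2, c_err=1.0, c_data=2.0)
```

The same skeleton carries over to richer models (e.g. a Kalman filter's predicted covariance in place of the linear variance growth), but the variance-growth model and the loss function always have to come from your specific problem.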