Is there a procedure in Bayesian inference for determining how much future information would invalidate your model?
Say I have some kind of time-series data, and I make an inference from it up to the current time. If future data is costly to obtain, is there a way to determine when the cost of my growing prediction error exceeds the cost of acquiring the new data and updating my inference?
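
To make the trade-off concrete, here is a minimal sketch of what I have in mind. The Gaussian random-walk state, the quadratic error cost, and every name and parameter below are assumptions chosen for illustration, not a method I've seen prescribed anywhere:

```python
# Illustrative sketch only: assumes a Gaussian random-walk state with known
# process/observation noise, a quadratic error cost, and a fixed price per
# observation. All parameter values are hypothetical.
import numpy as np

PROCESS_VAR = 0.5   # assumed variance added to the state per time step
OBS_VAR = 1.0       # assumed measurement noise variance
ERROR_COST = 2.0    # assumed cost per unit of squared prediction error
DATA_COST = 3.0     # assumed cost of acquiring one new observation

def predictive_variance(posterior_var: float, steps_ahead: int) -> float:
    """Posterior predictive variance after `steps_ahead` steps with no new
    data: under a random walk, uncertainty grows linearly in time."""
    return posterior_var + steps_ahead * PROCESS_VAR

def updated_variance(prior_var: float) -> float:
    """Posterior variance after one Kalman-style update with a new observation."""
    return 1.0 / (1.0 / prior_var + 1.0 / OBS_VAR)

def steps_until_update(posterior_var: float, max_horizon: int = 100) -> int:
    """First horizon at which the expected saving in squared-error cost from
    observing exceeds the price of the observation."""
    for k in range(1, max_horizon + 1):
        var_without = predictive_variance(posterior_var, k)
        var_with = updated_variance(var_without)
        # Under squared-error loss, expected loss equals predictive variance,
        # so the expected benefit of observing is the variance reduction.
        benefit = ERROR_COST * (var_without - var_with)
        if benefit > DATA_COST:
            return k
    return max_horizon

if __name__ == "__main__":
    print("Buy new data after", steps_until_update(posterior_var=1.0), "steps")
```

The intent is that predictive uncertainty, and hence expected loss, grows while no data arrives, and the sketch reports the first horizon at which paying for one more observation is worth it. Is there an established procedure that formalizes this kind of comparison?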
Effectiveness is desirable; effectiveness is measured by results; consistency and verifiability are how we measure what is real.
As a corollary, things that have no evidence do not merit belief. We needn't presume that we are not in a simulation; we can evaluate the evidence for it.
The central perspective shift is recognizing that beliefs are not assertions about reality, but assertions about our knowledge of reality. This is what is meant by the map and the territory.