Ideal rational entities just assign probabilities to each possibility (using Occam’s razor if nothing better suggests itself), and then update using Bayes’ theorem when they see evidence. They can then act in light of these probabilities (trying to maximise some utility function). The points on your list all follow from trying to obey the laws of probability and utility maximisation. In the case of the earthquake there are a variety of explanations: some fairly likely (“nearby avalanche”, “large creatures (e.g. George) moving about”, etc.), some fairly unlikely (“plate tectonics”, “attack by a technologically advanced tribe”), and some very unlikely (“God did it”, “random movement due to thermal vibrations”). As they gain more evidence their beliefs will change, but they never have to accept one belief or another; they can quite happily maintain their probability distribution and be honest about their ignorance.
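Here's a minimal sketch of that updating process. The hypotheses come from the example above, but the priors and likelihoods are illustrative numbers I've made up, not anything implied by it:

```python
# Prior probabilities over explanations, roughly ordered by simplicity
# (an Occam-style prior). All numbers below are assumptions for illustration.
priors = {
    "nearby avalanche":       0.30,
    "large creature moving":  0.30,
    "plate tectonics":        0.15,
    "advanced tribe attack":  0.15,
    "god did it":             0.05,
    "thermal vibrations":     0.05,
}

# P(evidence | hypothesis): how strongly each hypothesis predicts the
# observation "the ground shook again a week later" (assumed values).
likelihoods = {
    "nearby avalanche":       0.20,
    "large creature moving":  0.40,
    "plate tectonics":        0.60,
    "advanced tribe attack":  0.10,
    "god did it":             0.50,
    "thermal vibrations":     0.01,
}

def bayes_update(priors, likelihoods):
    """Return the posterior P(H | E) ∝ P(E | H) * P(H), normalised to sum to 1."""
    unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

posteriors = bayes_update(priors, likelihoods)
for h, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{h:24s} {p:.3f}")
```

Note that the output is still a full distribution: no hypothesis gets "accepted", the probability mass just shifts as evidence comes in.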
Also note that their probability distribution can be useful even if it is spread across many possibilities (i.e. if they are very ignorant). For example, if all of the most likely hypotheses predict recurrent but infrequent earthquakes, then they can be confident in this prediction even though they don’t know which of those hypotheses is correct.
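A rough sketch of that point, again with made-up numbers: the probability of the prediction is obtained by marginalising over all the hypotheses, so it can be high even when no single hypothesis dominates.

```python
# Posterior over hypotheses (spread out -- the entity is quite ignorant).
# All numbers are illustrative assumptions.
posterior = {
    "nearby avalanche":       0.25,
    "large creature moving":  0.25,
    "plate tectonics":        0.20,
    "advanced tribe attack":  0.15,
    "god did it":             0.10,
    "thermal vibrations":     0.05,
}

# P(another quake within a year | hypothesis) -- assumed values; the most
# probable hypotheses all predict recurrent but infrequent earthquakes.
p_quake_given_h = {
    "nearby avalanche":       0.8,
    "large creature moving":  0.9,
    "plate tectonics":        0.9,
    "advanced tribe attack":  0.7,
    "god did it":             0.5,
    "thermal vibrations":     0.1,
}

# Law of total probability: P(quake) = sum over H of P(quake | H) * P(H).
p_quake = sum(posterior[h] * p_quake_given_h[h] for h in posterior)
print(f"P(another quake within a year) = {p_quake:.2f}")
# Well above the probability of any single hypothesis, despite the ignorance.
```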
The trick is to stop thinking in terms of outright accepting or rejecting a hypothesis, and to start thinking in terms of how much probability you assign to each one.