That’s not the only problem. An agent that assigns equal probability to all possible experiences will never update.
Oh, that’s sneaky.
Perhaps a perfect agent should occasionally—very occasionally—perturb a random selection of its own priors by some very small factor (10^-10 or smaller) in order to avoid such a potential mathematical dead end?
Nice try, but random perturbations won’t help here.
I think that this re-emphasises the importance of good priors.
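The two claims above can be checked directly with Bayes' rule: if every hypothesis assigns the same probability to every possible observation, the posterior always equals the prior, and perturbing the prior slightly only changes which distribution the agent is stuck at. A minimal sketch (the function name and the particular numbers are illustrative, not from the dialogue):

```python
from fractions import Fraction

def bayes_update(prior, likelihoods):
    """Posterior_i is proportional to prior_i * likelihood_i, renormalised."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# An agent whose hypotheses all assign equal probability to every
# possible experience: each hypothesis predicts the observation equally.
prior = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]
flat_likelihoods = [Fraction(1, 8)] * 3

# The evidence carries no information, so the posterior is the prior.
assert bayes_update(prior, flat_likelihoods) == prior

# A tiny perturbation of the prior (the dialogue's 10^-10 idea) does not
# help: the likelihoods are still flat, so the agent simply stays stuck
# at the perturbed prior instead of the original one.
eps = Fraction(1, 10**10)
perturbed = [prior[0] + eps, prior[1] - eps, prior[2]]
assert bayes_update(perturbed, flat_likelihoods) == perturbed
```

The fix has to come from the likelihoods, not the prior weights: only hypotheses that make *different* predictions about experience can ever be separated by evidence, which is the sense in which good priors (over genuinely distinguishing hypotheses) matter.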