Would a similar statement couched in terms of limits be true?
As an agent’s computational ability increases, its beliefs should converge with those of similar agents regardless of their priors.
The limit you proposed doesn't help. One's beliefs after applying Bayes' rule are determined by the prior and by the evidence. We're talking about a situation where the evidence is the same and finite, and the priors differ. Having more compute power doesn't enter into it.
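A minimal sketch of that point, with made-up numbers: two agents apply Bayes' rule exactly (no compute limit at all), conditioning on identical finite evidence, and their posteriors still differ because their priors do.

```python
from fractions import Fraction

def posterior(prior_h, lik_e_given_h, lik_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    num = lik_e_given_h * prior_h
    den = num + lik_e_given_not_h * (1 - prior_h)
    return num / den

# Identical, finite evidence: both agents agree on the likelihoods.
lik_h, lik_not_h = Fraction(1, 2), Fraction(1, 4)

# Hypothetical priors: agent A starts at 9/10, agent B at 1/10.
post_a = posterior(Fraction(9, 10), lik_h, lik_not_h)  # -> 18/19
post_b = posterior(Fraction(1, 10), lik_h, lik_not_h)  # -> 2/11

# Exact arithmetic: no rounding, no approximation, unlimited "compute".
# The posteriors still disagree, because the priors do.
print(post_a, post_b)
```

The exact-fraction arithmetic makes the point concrete: the disagreement survives perfect computation, so it cannot be a limit of computational ability.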