Barkley, I think you may be regarding likelihood distributions as fixed properties held in common by all agents, whereas I am regarding them as variables folded into the prior—if you have a probability distribution over sequences of observables, it implicitly includes beliefs about parameters and likelihoods. Where agents disagree about prior likelihood functions, not just prior parameter probabilities, their beliefs may trivially fail to converge.
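To make the non-convergence claim concrete, here is a minimal sketch (my own illustration, not from the original comment): two Bayesian agents observe the same coin-flip data and share the same uniform prior over a parameter theta, but hold mirrored likelihood functions, so their posteriors move apart rather than together.

```python
# Both agents start from the same uniform Beta(1, 1) prior over theta.
# Agent A's likelihood: P(heads | theta) = theta.
# Agent B's likelihood: P(heads | theta) = 1 - theta  (a "mirrored" model).

def posterior_mean(alpha: float, beta: float) -> float:
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

heads, tails = 90, 10  # the shared sequence of observations

# Conjugate Beta updates under each agent's own likelihood function:
mean_a = posterior_mean(1 + heads, 1 + tails)  # heads push A's theta up
mean_b = posterior_mean(1 + tails, 1 + heads)  # heads push B's theta down

print(round(mean_a, 3), round(mean_b, 3))  # 0.892 0.108 -- no convergence
```

More data only sharpens the disagreement: each additional head drives A's posterior toward 1 and B's toward 0, exactly because the dispute is over the likelihood function rather than the parameter.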
Andrew’s point may be particularly relevant here—it may indeed be that statisticians call what I am talking about a “model”. (Although in some cases, like the Laplace’s Law of Succession inductor, I think they might call it a “model class”?) Jaynes, however, would have called it our “prior information” and he would have written “the probability of A, given that we observe B” as p(A|B,I) where I stands for all our prior beliefs including parameter distributions and likelihood distributions. While we may often want to discriminate between different models and model classes, it makes no sense to talk about discriminating between “prior informations”—your prior information is everything you start out with.
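For readers unfamiliar with the Laplace's Law of Succession inductor mentioned above, here is a minimal sketch (my own illustration, assuming the standard formulation): a uniform Beta(1, 1) prior over an unknown Bernoulli success probability, with the likelihood function folded into a single predictive distribution over observable sequences.

```python
def rule_of_succession(successes: int, trials: int) -> float:
    """Posterior predictive probability that the next trial succeeds,
    given a uniform Beta(1, 1) prior over the success probability."""
    return (successes + 1) / (trials + 2)

print(rule_of_succession(0, 0))   # 0.5: no data, symmetric prior
print(rule_of_succession(9, 10))  # ~0.833 after 9 successes in 10 trials
```

Note that the formula mentions no likelihood explicitly: the Bernoulli model and the parameter prior have been integrated out into a distribution over observables, which is the sense in which the "prior" already contains them.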