I’m not sure how that works. Bayes’ theorem, per se, is correct. I’m not talking about a level of abstraction in which I try to define decisions/beliefs as symbols; I’m talking about the bare “two different brains with different initial states, subject to the same input, will end up in different final states”.
Differences in opinion between two agents could instead be explained by their having had different experiences, by beliefs being path dependent (the order of updates matters), or by inference being influenced by random chance.
All of that can be accounted for in a Bayesian framework though? Different experiences produce different posteriors, of course, and as for path dependence and random chance, I think you can easily get those by introducing some kind of hidden state describing things we don’t quite know about the inner workings of the brain.
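Here’s a toy sketch of what I mean (purely illustrative, and not a claim about how brains actually work): two agents share the same prior and see the same data, but a hidden internal state, modelled here as nothing more than a private noise source perturbing each agent’s likelihood assessments, is enough to make their posteriors diverge. The hypotheses, data, and noise model are all made up for the example.

```python
import random

# Two agents share the same prior and see the same coin-flip data,
# but each has a hidden internal state (here: a private noise source
# perturbing its likelihood assessments), so their posteriors diverge.

HYPOTHESES = {"biased": 0.7, "fair": 0.5}  # P(heads | hypothesis)
PRIOR = {"biased": 0.5, "fair": 0.5}       # identical starting point

def update(belief, outcome, rng, noise=0.05):
    """One noisy Bayesian-style update on a single coin flip."""
    unnormalized = {}
    for h, p_heads in HYPOTHESES.items():
        likelihood = p_heads if outcome == "H" else 1 - p_heads
        # the "hidden state": this agent's likelihood assessment is slightly off
        likelihood = min(max(likelihood + rng.uniform(-noise, noise), 0.01), 0.99)
        unnormalized[h] = belief[h] * likelihood
    total = sum(unnormalized.values())
    return {h: v / total for h, v in unnormalized.items()}

data = ["H", "H", "T", "H", "T", "H", "H"]         # same input for both agents
agent_a, agent_b = dict(PRIOR), dict(PRIOR)        # same prior for both agents
rng_a, rng_b = random.Random(1), random.Random(2)  # different hidden states

for outcome in data:
    agent_a = update(agent_a, outcome, rng_a)
    agent_b = update(agent_b, outcome, rng_b)

print(agent_a)  # the two posteriors differ despite identical priors and data
print(agent_b)
```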
All of that can be accounted for in a Bayesian framework though?
I mean that those factors don’t presuppose different priors. You could still end up with different “posteriors” even with the same “starting point”.
An example of an (informal) alternative to Bayesian updating that doesn’t require subjective priors is Inference to the Best Explanation (IBE). One could, of course, model the criteria that determine the goodness of explanations as a sort of “prior”. But those criteria would be part of the hypothetical IBE algorithm, not a free variable as in Bayesian updating. One could also claim that there are no objective facts about the goodness of explanations and that IBE is therefore invalid. But that’s an open question.
Whenever I’ve seen people invoking Inference to the Best Explanation to justify a conclusion (as opposed to philosophising about the logic of argument), they have given no reason why their preferred explanation is the Best; they have just pronounced it so. A Bayesian reasoner can (or should be able to) show their work, but the ItoBE reasoner has no work to show.

These can often be operationalized as ‘How much of the variance in the output do you predict is controlled by your proposed input?’
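For instance, a minimal sketch of that operationalization (the data and the linear model are invented for illustration): fit the proposed input against the output and report the fraction of variance it accounts for.

```python
import numpy as np

# Toy sketch of the "variance explained" question above: how much of the
# variance in an output is accounted for by a proposed explanatory input.
# The data and the linear model are invented for illustration.

rng = np.random.default_rng(0)
x = rng.normal(size=200)                        # proposed explanatory input
y = 2.0 * x + rng.normal(scale=1.0, size=200)   # output = signal + noise

# Fit a simple linear model y ~ a*x + b and report R^2,
# the fraction of variance in y accounted for by x.
a, b = np.polyfit(x, y, 1)
residuals = y - (a * x + b)
r_squared = 1 - residuals.var() / y.var()

print(f"fraction of variance in y explained by x: {r_squared:.2f}")
```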
IBE arguments don’t exactly work that way. Usually one person argues that some hypothesis H is the best available explanation for the evidence E in question, and if the other person agrees with that, it is hard for them not to also agree that H is probably true (or something like that). Most people already accept IBE as an inference rule. They wouldn’t say “Yes, the existence of an external world seems to be the best available explanation for our experiences, but I still don’t believe the external world exists”, nor “Yes, the best available explanation for the missing cheese is that a mouse ate it, but I still don’t believe a mouse ate the cheese”. And if they do disagree about H being the best available explanation, they usually feel compelled to argue that some H’ is a better explanation.
What is the measure of goodness? How does one judge what is the “better” explanation? Without an account of that, what is IBE?

Without an account of that, IBE is the claim that something being the best available explanation is evidence that it is true.
That being said, we typically judge the goodness of a possible explanation by a number of explanatory virtues like simplicity, empirical fit, consistency, internal coherence, external coherence (with other theories), consilience, unification, etc. Clarifying and justifying those virtues on other (including Bayesian) grounds is something epistemologists work on.
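As a purely illustrative sketch (the virtues, scores, and weights below are invented, and real accounts are more nuanced than a weighted sum), one could think of IBE as a scoring rule whose criteria are baked into the procedure itself rather than supplied as a free parameter the way a prior is:

```python
from dataclasses import dataclass

# Purely illustrative sketch of IBE as a fixed scoring rule: the virtues and
# their weights are hard-coded into the procedure, i.e. part of the algorithm
# itself rather than a free parameter each reasoner supplies (like a prior).
# All scores and weights below are invented for the example.

@dataclass
class Explanation:
    name: str
    simplicity: float          # each virtue scored in [0, 1]
    empirical_fit: float
    external_coherence: float  # coherence with other accepted theories

# Fixed weights: part of the (hypothetical) IBE procedure, not a free variable.
WEIGHTS = {"simplicity": 0.3, "empirical_fit": 0.5, "external_coherence": 0.2}

def best_explanation(candidates):
    """Return the best available explanation under the fixed scoring rule."""
    def score(e):
        return (WEIGHTS["simplicity"] * e.simplicity
                + WEIGHTS["empirical_fit"] * e.empirical_fit
                + WEIGHTS["external_coherence"] * e.external_coherence)
    return max(candidates, key=score)

candidates = [
    Explanation("a mouse ate the cheese", simplicity=0.8, empirical_fit=0.9,
                external_coherence=0.9),
    Explanation("a burglar stole only the cheese", simplicity=0.4,
                empirical_fit=0.6, external_coherence=0.3),
]
print(best_explanation(candidates).name)  # -> "a mouse ate the cheese"
```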
I’d definitely call any assumption about which forms preferred explanations should take a “prior”. Maybe I have a more flexible concept of what counts as Bayesian than you, in that sense? Priors don’t need to be free parameters; the process has to start somewhere. But if you already have some data and then acquire some more data, obviously the previous data will still affect your conclusions.
The problem with calling parts of a learning algorithm that are not free variables a prior is that then anything (every part of any learning algorithm) would count as a prior, even the Bayesian conditionalization rule itself. But that’s not what Bayesians consider part of a prior.
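To make the distinction concrete: Bayesian conditionalization is the fixed rule

$$P_{\text{new}}(H) \;=\; P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)},$$

while the prior $P(H)$ is the free input a particular reasoner supplies; it is only the latter that Bayesians call the prior.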