Well, the second of those things already has very serious problems. See, for example, Quine’s confirmation holism. We’ve known for a long time that our theories are under-determined by our observations and that we need some other way of adjudicating between empirically equivalent theories. This was our basis for preferring Special Relativity over Lorentz Ether Theory. Parsimony seems like one important criterion, but it raises two questions:
1. One man’s simple seems like another man’s complex. How do you rigorously identify the more parsimonious of two hypotheses? Lots of people think God is a very simple hypothesis. The most productive approach I know of is the algorithmic complexity one that is popular here.
2. Is parsimony important because parsimonious theories are more likely to be ‘real’, or is the issue really one of developing clear and helpful prediction-generating devices?
The way the algorithmic probability stuff has been leveraged is by building candidates for universal priors. But this doesn’t seem like the right way to do it. Beliefs are about anticipating future experience, so they should take the form of ‘sensory experience x will occur at time t’ (or something reducible to this). Theories aren’t like this. Theories are frameworks that let us take some sensory experience and generate beliefs about our future sensory experiences.
So I’m not sure it makes sense to have beliefs distinguishing empirically identical theories. That seems like a kind of category error: a map-territory confusion. The question is, what do we do with this algorithmic complexity stuff that was so promising? I think we still have good reasons to be thinking cleanly about complicated science; the QM interpretation debate isn’t totally irrelevant. But it isn’t obvious that algorithmic simplicity is what we want out of our theories (nor is it clear that what we want is the same thing other agents might want out of their theories). (ETA: Though of course K-complexity might still be helpful in making predictions between two possible futures that are empirically distinct. For example, we can assign a low probability to finding evidence of a moon landing conspiracy, since the theory that would predict discovering such evidence is unparsimonious. But if that is the case, if theories can be ruled improbable on the basis of their structure alone, why can we only do this with empirically distinct theories? Shouldn’t all theories be understandable in this way?)
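To make the parsimony-weighting idea concrete, here is a toy sketch of a complexity-penalized predictor (my own illustration, not anything standard): hypotheses are of the form "the bit stream repeats pattern p forever", and each gets prior weight 2^-len(p), a crude stand-in for the universal prior (true Kolmogorov complexity is uncomputable, so any runnable version has to pick a simplistic reference machine like this). Hypotheses contradicted by the observed data are discarded; the empirically adequate survivors then vote on the next bit, weighted by their simplicity.

```python
from itertools import product

def repeating_pattern_predictor(observed, max_pattern_len=8):
    """Toy complexity-weighted predictor over bit streams.

    Hypothesis space: 'the stream repeats pattern p forever', for every
    bit pattern p up to max_pattern_len. Prior weight 2^-len(p) plays
    the role of the universal prior. Returns the posterior probability
    that the next bit is '1'.
    """
    n = len(observed)
    weight_one = total = 0.0
    for length in range(1, max_pattern_len + 1):
        for bits in product("01", repeat=length):
            pattern = "".join(bits)
            # Unroll the pattern far enough to cover the observed
            # prefix plus one extra (predicted) bit.
            stream = pattern * (n // length + 2)
            if stream[:n] == observed:       # empirically adequate so far
                w = 2.0 ** -length           # parsimony prior
                total += w
                if stream[n] == "1":
                    weight_one += w
    return weight_one / total if total else 0.5
```

On the input "010101" the shortest adequate pattern ("01") dominates the posterior, so the predictor strongly favors 0 as the next bit even though longer patterns consistent with the data predict 1. Note this only ever discriminates between hypotheses that make different predictions about the future stream; two patterns that generate identical streams are, on this picture, the same hypothesis, which is the point about empirically identical theories above.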