When we try to build a model of the underlying universe, what we’re really talking about is trying to derive properties of a program which we are observing (and are a component of), and which produces our sense experiences. Probably quite a short program in its initial state, in fact (though possibly not one limited by the finite precision of traditional Turing machines).
So, that gives us a few rules that seem likely to be general: the underlying model must be internally consistent and mathematically describable, and must have a total Kolmogorov complexity (K-complexity) less than the amount of information in the observable universe (otherwise we, as components of it, could not even represent it well enough to reason about it).
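(Kolmogorov complexity itself is uncomputable, but any real compressor gives a concrete upper bound on it. Here is a minimal Python sketch of that intuition, with function names and test data of my own choosing purely for illustration: observations produced by a short program compress well, while incompressible noise admits no description much shorter than itself.)

```python
import os
import zlib

def k_upper_bound(data: bytes) -> int:
    # True Kolmogorov complexity is uncomputable; a compressor only ever
    # gives an upper bound.  Return the length, in bits, of the zlib
    # encoding as a crude stand-in.
    return 8 * len(zlib.compress(data, 9))

regular = b"ab" * 10_000          # 20,000 bytes of obvious structure
noisy = os.urandom(20_000)        # 20,000 bytes with no short description

print(k_upper_bound(regular))     # small: a short "program" explains the data
print(k_upper_bound(noisy))       # near 8 * 20,000: no structure to exploit
```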
So the question to ask is really “can I imagine a program state that would make this proposition true, given my current beliefs about the organization of the program?”
This is resilient to the atoms / QM problem, at least, since you can always revise the underlying program description to better fit the evidence.
Although, in practice, most of what intelligent entities do is more precisely described as ‘grammar fitting’ than ‘program induction.’ We reason probabilistically, essentially by throwing heuristics at the wall and seeing which ones offer marginal returns on predicting future sense impressions, since trying to guess the next word in a sentence by reverse-deriving the original state of the universe-program and iterating it forwards is not practical for most people. That massive mess of semi-rational, anticipation-justified rules of thumb is what allows us to reason from day to day.
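(To make the contrast concrete, here is a toy sketch of ‘grammar fitting’: a bigram counter that predicts the next word from surface statistics alone, with no model of whatever process actually produced the sentence. The names and example sentence are my own, purely illustrative.)

```python
from collections import Counter, defaultdict

def fit_bigram(corpus: str):
    # 'Grammar fitting' in miniature: record which word tends to follow
    # which.  No attempt is made to recover the generating process --
    # this is a cheap predictive heuristic, not program induction.
    words = corpus.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word: str):
    counts = model.get(word)
    return counts.most_common(1)[0][0] if counts else None

model = fit_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # 'cat': a useful guess, not a derivation
```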
So a more pragmatic question is ‘How does this change my anticipation of future events?’ or ‘What sense experiences do I expect to have differently as a result of this belief?’
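(One way to cash that question out numerically is a plain Bayesian update. The sketch below, with numbers invented purely for illustration, shows how much credence in a belief should shift once a sense experience it predicted actually arrives.)

```python
def posterior(prior, p_obs_if_true, p_obs_if_false):
    # A belief 'pays rent' through the observations it makes more or less
    # likely: once the observation occurs, Bayes' rule says how far the
    # credence in the belief should move.
    p_obs = prior * p_obs_if_true + (1 - prior) * p_obs_if_false
    return prior * p_obs_if_true / p_obs

# Toy numbers: a belief held at 0.3 that predicts an observation strongly
# (0.9 vs 0.2 without it) rises to roughly 0.66 once the observation shows up.
print(posterior(prior=0.3, p_obs_if_true=0.9, p_obs_if_false=0.2))
```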
It is only when we seek to understand more deeply and generally, or when dealing with things that are not directly observable, that it is practical to try to reason about the actual program underlying the universe.