What you really want is a vast hierarchical forest of causal models, ordered by what parameterizes what. A bridge hypothesis, or reduction, is then a continuous function from the high-dimensional outcome-space of one causal model to the lower-dimensional free-parameter space of another causal model; specifically, a function that “compresses well” with respect to the empirical data available about the “truer” model’s outcome space (e.g., perturbing the velocity of one molecule in a molecular simulation of a gas cloud doesn’t cause a large change to the temperature parameter of a higher-level thermodynamic simulation of the same gas cloud). I don’t know exactly what sort of functions these would be, but they should be learnable from data.
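The gas-cloud example can be made concrete with a toy sketch. Here the bridge function is the textbook kinetic-theory map from micro-state (molecular velocities) to macro-parameter (temperature); the specific molecule count, mass, and velocity distribution are illustrative assumptions, not anything fixed by the argument. The point is that a violent perturbation to one molecule barely moves the high-level parameter, i.e. the map “compresses well”:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Micro-state: velocities of n molecules (m/s), hypothetical gas cloud
v = rng.normal(0.0, 300.0, size=(n, 3))
m = 4.65e-26        # kg, roughly the mass of one N2 molecule
k_B = 1.380649e-23  # Boltzmann constant, J/K

def temperature(velocities):
    """Bridge function: micro-state -> macro-parameter.
    T = m * <|v|^2> / (3 k_B), the ideal-gas kinetic-theory reduction."""
    return m * np.mean(np.sum(velocities**2, axis=1)) / (3 * k_B)

T0 = temperature(v)

# Perturb a single molecule's velocity by a factor of 10
v_perturbed = v.copy()
v_perturbed[0] *= 10.0
T1 = temperature(v_perturbed)

# The relative change in the macro-parameter is tiny: the bridge
# hypothesis is robust to micro-level perturbations.
rel_change = abs(T1 - T0) / T0
```

A learned bridge function would replace the closed-form `temperature` with something fit from data, but the robustness property it should satisfy is the same.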
Metaphysical monism, dualism, or pluralism then consists in the assumptions we make about the graph structure of the model hierarchy. We could assume a strict tree structure, in which each higher-level (more abstract, lower-dimensional parameter space) model is parameterized by only one parent, but that leaves us unable to apply multiple theories to one situation (e.g., we can’t make predictions about how a human being behaves when he helps you move house, because we need both some physics and some psychology to know when he’s tired from lifting heavy boxes). We should therefore assume a DAG structure, which gives us a weak metaphysical pluralism (we can apply both physics and psychology where appropriate).
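The tree-versus-DAG distinction is easy to state in code. In this toy sketch (the model names and the `is_tree`/`roots` helpers are hypothetical, just for illustration), each node lists the parent models that parameterize it; a tree allows at most one parent per node, while a DAG lets the moving-house situation draw on both physics and psychology:

```python
# Toy hierarchy: each key maps a model to the list of parent models
# whose outcome-spaces parameterize it.
tree = {
    "physics": [],
    "chemistry": ["physics"],
    "psychology": ["chemistry"],
}

dag = {
    "physics": [],
    "chemistry": ["physics"],
    "psychology": ["chemistry"],
    "human_moving_boxes": ["physics", "psychology"],  # two parents: not a tree
}

def is_tree(parents):
    """A hierarchy is a tree iff every model has at most one parent."""
    return all(len(p) <= 1 for p in parents.values())

def roots(parents):
    """Root-level models: those parameterized by nothing deeper."""
    return [node for node, p in parents.items() if not p]
```

Note that the DAG here still has a single root; weak pluralism is about multiple parents per situation, not multiple fundamental realities.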
But what we think we want is strong metaphysical monism: the assumption, built into our algorithm, that ultimately there is only one root node in the Grand Hierarchy of Models, a Grand Unified Theory of reality, even if we don’t actually know what it is. What we think we need to avoid is strong metaphysical pluralism: the (AFAIK, erroneous) inference by our algorithm that there are multiple root-level nodes in the Grand Hierarchy of Models, and thus multiple incommensurable fundamental realities.
Questions:
What would reality look like if it had multiple, incommensurable root-level “programs” running it forward?
Is it worth building a hierarchical inference algorithm on the hard-coded assumption that only one root-level reality exists, or is it better to allow for metaphysical uncertainty by “only” designing in a prior that assigns greater probability to model hierarchies with fewer, ideally only one, program?
Actually, isn’t it more correct to build the hierarchies from the bottom up as we acquire the larger and larger amounts of empirical data necessary to build theories with higher-dimensional free-parameter spaces? And in that circumstance, how do we encode the preference for building reductions and unifying theories wherever possible, with a kind of “metaphysical simplicity prior”?
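One minimal way the “metaphysical simplicity prior” from the last question could look, as a hedged sketch: a log-prior over hierarchies that penalizes each additional root-level model, so monist hierarchies are softly favored over pluralist ones without hard-coding a single root. The penalty form and the `alpha` weight are assumptions for illustration:

```python
def root_count(parents):
    """Number of root-level models (nodes with no parents)."""
    return sum(1 for p in parents.values() if not p)

def simplicity_logprior(parents, alpha=1.0):
    """Hypothetical metaphysical simplicity prior: log P(hierarchy)
    decreases linearly in the number of roots. alpha controls how
    strongly monism is preferred; the choice of a linear penalty is
    an assumption, not derived from anything above."""
    return -alpha * root_count(parents)

# One root: a single Grand Unified Theory at the bottom.
monist = {"GUT": [], "physics": ["GUT"], "psychology": ["physics"]}

# Two roots: two incommensurable fundamental "programs".
pluralist = {"physics": [], "mind": [], "psychology": ["physics", "mind"]}
```

Under a bottom-up construction, this term would simply be added to whatever data-fit score each candidate hierarchy earns, so that a unifying reduction is adopted whenever it doesn’t cost too much predictive accuracy.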