Better, then, to unlabel the blue line entirely, and when someone wants to know what ontological difference exists between the higher rows and the lower, say “Mu.” … Reductionism proper is just this: noticing that green arrows are always present, and always point up.
There’s a tension here: green arrows are a property of our maps, but to the extent that our maps are accurate, they do actually reflect the territory. So sure, the green arrows are always present and always point from physically smaller to physically larger, but what are the ontological determinants of the arrowhead positions, i.e., why are the rows where they are?
It often occurs in physics that from a small number of seemingly obvious and simple propositions, a great many far-ranging consequences can be deduced. This discussion brings up an example: the seemingly simple and obvious propositions are (i) that an element of a row is composed of a large number of components of the row immediately below; and (ii) our minds pick out rows by virtue of some kind of apparent “systematizability” or “model-ability”.
From this we can deduce that there is an important ontological property which distinguishes a row from the one immediately below. It is the property of macrostate reproducibility, that is, if there is an accurate model of macrostate (i.e., upper row) quantities, then the fine details of the microstate (i.e., row immediately below) just don’t matter—we can automatically infer that almost anything that can happen at the microstate level will lead to that same description at the macrostate level.
The paradigmatic example is thermodynamics, in which all of the important information about the microstate (particle momenta and positions) is captured by a few macrostate variables (e.g., pressure, volume, temperature, chemical potential, etc.). Empirically, it is observed that knowledge of the values of only a subset of the macrostate variables suffices to predict the values of the remaining macrostate variables. We can use that empirical observation plus an accurate model of the microstates to create an accurate model of macrostates as follows: (i) create a probability distribution over the microstates by maximizing entropy subject to the constraint that the macrostate variables in the predictive subset are fixed; (ii) take expectations over this probability distribution to predict the remaining macrostate variables.
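The two-step procedure above can be sketched concretely. This is a toy illustration, not thermodynamics proper: the four discrete "microstate" energy levels are made up, and fixing the mean energy stands in for fixing the predictive macrostate variables. Maximizing entropy subject to a fixed mean energy yields the Gibbs distribution p_i ∝ exp(−β·E_i), with β chosen (here by bisection) so the constraint holds; expectations under that distribution then predict the remaining macrostate quantities.

```python
import math

# Toy system (invented for illustration): four microstate energy levels.
energies = [0.0, 1.0, 2.0, 3.0]

def gibbs(beta):
    """Max-entropy distribution subject to fixed mean energy: p_i proportional to exp(-beta * E_i)."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)
    return [w / z for w in weights]

def mean_energy(beta):
    """Expected energy under the Gibbs distribution at inverse temperature beta."""
    return sum(p * e for p, e in zip(gibbs(beta), energies))

def solve_beta(e_target, lo=-50.0, hi=50.0, iters=200):
    """Bisection works because mean_energy decreases monotonically in beta."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_energy(mid) > e_target:
            lo = mid  # mean energy too high -> raise beta
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Step (i): fix the predictive macrostate variable (mean energy = 1.0)
# and maximize entropy over microstate distributions.
beta = solve_beta(1.0)
p = gibbs(beta)

# Step (ii): take expectations to predict remaining macrostate variables,
# e.g. the entropy S = -sum p_i log p_i.
entropy = -sum(pi * math.log(pi) for pi in p)
```

The same skeleton scales to real thermodynamic ensembles; only the state space and the constrained variables change.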
The point: when someone wants to know what ontological difference exists between the higher rows and the lower, one trivial observation (upper rows are compositions of lower ones) and one mildly more subtle anthropic one (our minds distinguish rows on the basis of some kind of systematizability) provide enough information to give a much better answer than “Mu”. This perspective gives a better definition of reductionism too: reductionism is the process of discovering which details of the microstate matter for accurate macrostate modeling, and which can be ignored.
what are the ontological determinants of the arrowhead positions, i.e., why are the rows where they are?
I would say they aren’t. There are many ways—probably an infinite number—to divide the same blue line into rows, depending on the theories and models invoked; the six in my diagrams are just an example. I don’t think the row divisions we as a civilization happen to use are privileged in any particular way.
almost anything that can happen at the microstate level will lead to that same description
The same description, yes; but the description isn’t the thing. Each microstate is identical with exactly one macrostate and vice versa, could we but perceive it in full; it does often happen that the descriptions of a large set of microstates all lead to a single description of just one macrostate, but this is only a fact about the information we’ve chosen to omit for our own convenience, not about the reality.
all the important information about the microstate
“Important” is the key word; reality never treats anything as unimportant—only we do. I think the distinction you’re making is an epistemic rather than an ontological one.
There are many ways—probably an infinite number—to divide the same blue line into rows, depending on the theories and models invoked; the six in my diagrams are just an example. [emphasis added]
Nothing I’m trying to communicate depends on the particular six rows you chose as the example. Rather, what I’m getting at is that the sheer fact of “model-ability” reflects an ontological property.
You have an idea about the way the universe operates, and it is, as far as I can tell, an incorrect idea. The heart of it is this:
Each microstate is identical with exactly one macrostate and vice versa, could we but perceive it in full; it does often happen that the descriptions of a large set of microstates all lead to a single description of just one macrostate, but this is only a fact about the information we’ve chosen to omit for our own convenience, not about the reality.
The phrase “our own convenience” is the problem: that a system-composed-of-lower-level-components is convenient for us turns out, non-obviously, to be contingent on a fact about the system, not just facts about us! We as engineers (and the process of evolution by natural selection) are able to create systems which reliably do something (transmit force, store energy, process information, etc.) because it is possible to aggregate lower-level components such that the macrostate behavior of the system is robust to the overwhelming majority of the lower level degrees of freedom.
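The robustness claim above can be demonstrated numerically. In this sketch (my own illustration, not from the thread), the "macrostate" is the mean squared speed of a toy gas of n particles, and each draw is a completely different random microstate: every low-level degree of freedom changes between draws, yet the macrostate value they yield converges as n grows.

```python
import random

def macro_value(n, rng):
    """One random microstate: n particle speeds in [0, 1); return the mean squared speed."""
    return sum(rng.random() ** 2 for _ in range(n)) / n

rng = random.Random(0)  # fixed seed for reproducibility
spreads = {}
for n in (100, 10_000, 100_000):
    # Five entirely unrelated microstates of the same n-particle system.
    samples = [macro_value(n, rng) for _ in range(5)]
    # How much the macrostate value varies across those microstates:
    spreads[n] = max(samples) - min(samples)
```

The spread shrinks roughly like 1/sqrt(n): for large n, wildly different microstates produce essentially the same macrostate, which is the sense in which the macrostate behavior is robust to the overwhelming majority of the low-level degrees of freedom.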
This is why you have a persistent sense of personal identity—why the “you” that falls asleep feels the same as (and can in principle be objectively identified with) the “you” that wakes up, despite the immense number of changes in your low-level state that take place while you’re asleep. Almost all of those changes (e.g., thermal noise in your neurons, ongoing biochemical processes, some of which integrate up to physiological processes) occur in low-level degrees of freedom that just don’t matter to the question of who you are. (Think of organ transplants!)
“Important” is the key word; reality never treats anything as unimportant—only we do. I think the distinction you’re making is an epistemic rather than an ontological one.
Yeah, I can see how that would happen—we don’t have good jargon for distinguishing the kind of “importance” I’m trying to communicate. The key point is that systems do exist in which the robustly determined upper-level degrees of freedom in one subsystem are coupled essentially only to the robustly determined upper-level degrees of freedom of another subsystem. In such a setup, the uncontrolled low-level degrees of freedom of the subsystems have no (okay, negligible) physical influence on one another. This is a fact about the system, not a fact about humans. (It does require counterfactual reasoning to discern this fact, which might confuse the issue.)
Here’s an example of a system in which one set of subsystem microstate detail is irrelevant to a second set of subsystem microstate detail. A thermally well-isolated piston contains a gas at a certain pressure, temperature, and volume. When a force is exerted on the head of the piston, the microstate of the gas changes in a way that depends only on the magnitude of the force, and not on (essentially) any of the microstate detail about how that force came to be exerted.
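The piston example can be made quantitative. Under the standard idealization of a quasi-static adiabatic compression of an ideal gas (the numbers below are made up for illustration), the final macrostate follows from the relation P·V^γ = const: it is a function of the force magnitude alone, with no dependence on the microstate detail of whatever exerted the force.

```python
GAMMA = 5.0 / 3.0  # adiabatic index for a monatomic ideal gas

def final_state(p0, v0, t0, force, area):
    """Final (P, V, T) after quasi-static adiabatic compression to pressure force / area."""
    p1 = force / area
    v1 = v0 * (p0 / p1) ** (1.0 / GAMMA)  # P * V**GAMMA is conserved
    t1 = t0 * (p1 * v1) / (p0 * v0)       # ideal gas law: P * V / T is conserved
    return p1, v1, t1

# Whether the force comes from a weight, a spring, or a hand, only its
# magnitude enters the calculation:
p1, v1, t1 = final_state(p0=101_325.0, v0=1e-3, t0=300.0, force=2_000.0, area=0.01)
```

The gas is compressed and heated, and every microstate-level fact about the source of the force has already dropped out at the point where `force / area` is computed.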