I don’t see any implications for determinism here, or even for complexity.
It’s just a statement that these abstract models (and many others) that we commonly use are not directly extensible into the finer-grained model.
One thing to note is that in both cases, there are alternative abstract models with variables that are fully specified by the finer-grained reality model. They’re just less convenient to use.
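As a toy illustration of what I mean by “fully specified” (a hypothetical Python sketch; the weight values here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Finer-grained model of one tank: just 100 concrete fish weights
# (hypothetical values, in kg).
weights = rng.normal(loc=1.2, scale=0.3, size=100)

# The usual abstract model posits a latent "population mean" mu that these
# weights were drawn from; mu is not a function of the 100 weights, and many
# different values of mu are compatible with exactly this fine-grained state.

# An alternative abstract model summarizes the tank by its empirical mean,
# which is fully specified by the finer-grained model...
empirical_mean = weights.mean()

# ...it is just less convenient: by itself it says nothing about tanks
# we have not yet observed.
print(empirical_mean)
```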
When you said ‘not directly extensible’, I understood that as meaning ‘logistically impossible to perfectly map onto a model communicable to humans’, with the fish fluctuating in weight, in reality, between and during every observation and between every batch. So even if perfect weight information were somehow obtained, it would only hold for that specific Planck second, and the averaging, etc., will always carry some inherent error. So every step along the way is a ‘loose coupling’, and the final product, a mental model of what we just read, is partially illusory.
Perhaps I am misunderstanding?
Though to me it seems clear that there will always be extra bits of information, of specification, that cannot be captured in any model, regardless of how far our modelling progresses: whether that’s from an abstract model to a finer-grained model, or from a finer-grained model to a whole-universe atomic simulation, or from a whole-universe atomic simulation to actual reality.
You are misunderstanding the post. There are no “extra bits of information” hiding anywhere in reality; where the “extra bits of information” are lurking is within the implicit assumptions you made when you constructed your model the way you did.
As long as your model is making use of abstractions—that is, using “summary data” to create and work with a lower-dimensional representation of reality than would be obtained by meticulously tracking every variable of relevance—you are implicitly making a choice about what information you are summarizing and how you are summarizing it.
This choice is forced to some extent, in the sense that there are certain ways of summarizing the data that barely simplify computation at all compared to using the “full” model. But even conditioning on a usefully simplifying (natural) abstraction having been selected, there will still be degrees of freedom remaining, and those degrees of freedom are determined by you (the person doing the summarizing). This is where the “extra information” comes from; it’s not because of inherent uncertainty in the physical measurements, but because of an unforced choice that was made between multiple abstract models summarizing the same physical measurements.
Of course, in reality you are also dealing with measurement uncertainty. But that’s not what the post is about; the thing described in the post happens even if you somehow manage to get your hands on a set of uncertainty-free measurements, because the moment you pick a particular way to carve up those measurements, you induce a (partially) arbitrary abstraction layer on top of the measurements. As the post itself says:
If there’s only a limited number of data points, then this has the same inherent uncertainty as before: sample mean is not distribution mean. But even if there’s an infinite number of data points, there’s still some unresolvable uncertainty: there are points which are boundary-cases between the “tree” cluster and the “apple” cluster, and the distribution-mean depends on how we classify those. There is no physical measurement we can make which will perfectly tell us which things are “trees” or “apples”; this distinction exists only in our model, not in the territory. In turn, the tree-distribution-parameters do not perfectly correspond to any physical things in the territory.
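(A minimal toy sketch of that boundary-case point, with an invented one-dimensional feature and made-up numbers rather than anything taken from the post:)

```python
import numpy as np

# Invented 1-D "height" measurements (in metres) for some observed objects.
apple_cluster = np.array([0.06, 0.07, 0.08, 0.09])
tree_cluster = np.array([4.0, 5.0, 6.0, 7.0])
boundary_case = 0.5  # a sapling, or an implausibly large apple?

# The physical measurements are identical either way; only the (partially
# arbitrary) classification of the boundary case differs.
tree_mean_if_counted_as_tree = np.append(tree_cluster, boundary_case).mean()  # 4.5
tree_mean_if_counted_as_apple = tree_cluster.mean()                           # 5.5

# No further measurement of the object tells us which of these is "the"
# tree distribution mean; that depends on the classification choice.
print(tree_mean_if_counted_as_tree, tree_mean_if_counted_as_apple)
```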
This implies nothing about determinism, physics, or the nature of reality (“illusory” or otherwise).
Ah, I understand what you’re getting at now, dxu; thanks for taking the time to clarify. Yes, there likely are no extra bits of information hiding away somewhere, unless there really are hidden parameters in space-time (as one of the possible resolutions to Bell’s theorem).
When I said ‘there will always be’ I meant it as ‘any conceivable observer will always encounter an environment with extra bits of information outside of their observational capacity’, and thus beyond any model or mapping. I can see how it could have been misinterpreted.
In regards to my comment on determinism, that was just some idle speculation which TAG helpfully clarified.
Perhaps it’s our difference in perspective, but the very paragraph you quoted in your comment seems to indicate that our perceptive faculties will always contain uncertainties, resulting in classification errors and therefore a correspondence mismatch.
I’m then extrapolating to the consequence that we will always be subject to ad-hoc adjustments in order to adapt, as the ambiguity, uncertainties, etc. will have to be translated into the concrete actions needed for us to continue to exist. This then results in an erroneous mental model, or what I term ‘partially illusory knowledge’.
It’s a bit of artistic flair, but I make the further jump of considering that, since all real objects are in fact constantly fluctuating at the Planck scale, in many different ways, every possible observation must lead to, at best, ‘partially illusory knowledge’: even an infinitesimally small variance still counts as a deviation from ‘completely true knowledge’. Maybe I’m just indulging in word games here.
The way I use “extensibility” here is between two different models of reality, and just means that one can be obtained from the other merely by adding details to it without removing any parts of it. In this case I’m considering two models, both with abstractions such as the idea that “fish” exist as distinct parts of the universe, have definite “weights” that can be “measured”, and so on.
One model is more abstract: there is a “population weight distribution” from which fish weights at some particular time are randomly drawn. This distribution has some free parameters, affected by the history of the tank.
One model is more fine-grained: there are a bunch of individual fish, each with their own weights, presumably determined by their own individual life circumstances. The concept of “population weight distribution” does not exist in the finer-grained model at all. There is no “abstract” population apart from the actual population of 100 fish in the tank.
So yes, in that sense the “population mean” variable does not directly represent anything in the physical world (or at least in our finer-grained model of it). That does not make it useless: its presence in the more abstract model allows us to make predictions about other tanks that we have not yet observed, which the finer-grained model does not.
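A rough sketch of the contrast between the two models (hypothetical Python, with made-up parameter values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Finer-grained model: the 100 individual fish weights in tank A
# (hypothetical values, in kg).
tank_a_weights = rng.normal(loc=1.2, scale=0.3, size=100)

# More abstract model: "weights are drawn from a population distribution"
# with free parameters, here estimated from tank A.
mu_hat = tank_a_weights.mean()
sigma_hat = tank_a_weights.std(ddof=1)

# The abstract model yields a prediction about a tank we have not observed:
predicted_tank_b = rng.normal(loc=mu_hat, scale=sigma_hat, size=100)
print(predicted_tank_b.mean())

# The finer-grained model contains only tank A's 100 concrete weights;
# there is no "population" object in it for mu_hat to refer to, and it
# makes no prediction about tank B on its own.
```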