It seems to me that the stipulations made here about the inferential potential of little information are made from the naive viewpoint that pieces of information are independent.
The idea of a plenitude of information with inferential potential that is readily accessible to a smart enough agent doesn’t hold if that information consists of things which are mostly dependent on each other.
A <try to taboo this word whenever you see it> hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass.
This statement could be true; however, it doesn’t mean that upon seeing a second blade of grass the agent could generate a new hypothesis, or that it could do so upon seeing everything on Earth at a macroscopic or even a microscopic scale (up to the limits of current instruments).
Heck, if you see a single bit, as long as you have the idea of causality, you can generate infinitely many hypotheses for why that bit was caused to be zero or one… you can even assign probabilities to them based on their complexity. A single bit is enough to generate every hypothesis about how the universe might work, but you’re just left with an infinite and very flat search space.
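To make the “infinite and very flat search space” concrete, here is a toy sketch (my own construction, not anything from the post): give each hypothesis a complexity-based prior weight of 2^-k, and suppose every hypothesis predicts the observed bit with probability 1/2. Updating on the single bit then leaves the posterior with exactly the shape of the (truncated) prior — the bit told you nothing about which hypothesis is right.

```python
from fractions import Fraction

# Toy model: hypotheses are indexed by a complexity level k = 1, 2, 3, ...
# with prior weight 2^-k (which sums to 1 over infinitely many levels).
def prior(k):
    return Fraction(1, 2**k)

# Assume every hypothesis predicts the observed bit with probability 1/2,
# i.e. the bit is maximally uninformative about complexity.
def posterior(k, levels=20):
    likelihood = Fraction(1, 2)
    z = sum(prior(j) * likelihood for j in range(1, levels + 1))
    return prior(k) * likelihood / z

# The posterior is just the prior renormalized over the truncated levels:
# its shape is unchanged, so the search space is exactly as "flat" as before.
for k in range(1, 5):
    print(k, posterior(k))
```

Since the likelihood term cancels in the normalization, the ratio between any two hypotheses’ probabilities is the same before and after the update.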
So, this view of the world boils down to:
Most properties of the world can be inferred, with a very small probability, from a very small amount of information. This is literally an inversion of the basic scientific assumption that observations about properties of the world carry over into other systems. If one can find properties that generalize, one can at least speculate as to what they are by observing a single one of the things they generalize to.
However, new information serves to shrink the search space and increase the probability we assign to a hypothesis being true.
Which is… true, but it’s such an obvious thing that I don’t think anyone would disagree with it. It’s just formulated in a very awkward way in this article to make it seem “new”. Or at least, I’ve got no additional insight from this other than the above.
It seems to me that the stipulations made here about the inferential potential of little information are made from the naive viewpoint that pieces of information are independent.
The idea of a plenitude of information with inferential potential that is readily accessible to a smart enough agent doesn’t hold if that information consists of things which are mostly dependent on each other.
Isn’t this point already assumed in the post? Note how the civilization isn’t really learning anything new anymore by the fourth grid:
The Fourth Grid doesn’t add much to the picture.
This only makes sense if the info is highly redundant, i.e. not independent.
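The redundancy point can be put in information-theoretic terms with a toy calculation (my own example, assuming uniform outcomes): when each new observation merely copies the last one, its conditional entropy given what has already been seen is zero, so the next “grid” carries no new information at all.

```python
import math
from collections import Counter

# Empirical Shannon entropy (in bits) of a list of equally likely samples.
def entropy(samples):
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in Counter(samples).values())

# H(X2 | X1) = H(X1, X2) - H(X1): how much the second observation adds.
def cond_entropy(pairs):
    return entropy(pairs) - entropy([a for a, _ in pairs])

# Independent fair bits: all four (X1, X2) pairs equally likely.
independent = [(a, b) for a in (0, 1) for b in (0, 1)]
# Fully dependent: the second observation always repeats the first.
dependent = [(a, a) for a in (0, 1)]

print(cond_entropy(independent))  # 1.0 bit: the second observation is fully new
print(cond_entropy(dependent))    # 0.0 bits: the second observation is redundant
```

In the dependent case the second observation shrinks the search space by nothing, which is exactly the situation by the fourth grid.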