If you take a small slice of laminar cortex, hook it up to an optic feed, and show it image sequences, it develops into Gabor-like filters which recognize/encode 2D edges. The Gabor filters have been studied mathematically and are optimal entropy-maximizing transforms for real-world images. The edges are real because of the underlying statistical structure of the universe, and they don’t form if you show white noise or nothingness.
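The filters in question have a simple closed form. As an illustrative sketch (the parameter values below are arbitrary choices for display, not fitted to cortical data), a 2D Gabor kernel is just a Gaussian envelope multiplying a sinusoidal carrier:

```python
import numpy as np

def gabor_kernel(size=21, wavelength=6.0, theta=0.0, sigma=3.0,
                 gamma=0.5, psi=0.0):
    """Gaussian envelope times a sinusoidal carrier: the edge-like
    receptive field that V1-style unsupervised learning converges to."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by the orientation angle theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + psi)
    return envelope * carrier

k = gabor_kernel()
print(k.shape)    # (21, 21)
print(k[10, 10])  # 1.0 at the center: envelope = 1, cos(0) = 1
```

Varying theta sweeps out the family of oriented edge detectors that the cortical slice (or any good sparse-coding algorithm) develops when exposed to natural images.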
Now take that same type of operation and stack many of them on top of each other and add layers of recursion and you get something that starts clustering the universe into patterns—words.
These patterns, which we are “psychologically inclined to regard as significant”, are actual universal structural patterns in the universe, so even if the particular named sequences are arbitrary and the ‘importance’ is debatable, the patterns themselves are not arbitrary. See the cluster structure of thingspace and related posts.
Can you provide even an approximate definition of the “complexity”?
See above. Complexity is approximated by words and concepts in the minds of intelligences. This relates back to optimal practical compression which is the core of intelligence.
Kolmogorov complexity is a start, but it’s not computationally tractable so it’s not a good definition. The proper definition of complexity requires an algorithmic definition of general optimal structural compression, which is the core sub-problem of intelligence. So in the future when we completely solve AI, we will have more concrete definitions of complexity. Until then, human judgement is a good approximation. And a first order approximation is “complexity is that which we use words to describe”.
If you take a small slice of laminar cortex, hook it up to an optic feed, and show it image sequences, it develops into Gabor-like filters which recognize/encode 2D edges. The Gabor filters have been studied mathematically and are optimal entropy-maximizing transforms for real-world images. The edges are real because of the underlying statistical structure of the universe, and they don’t form if you show white noise or nothingness.
Humans possess powerful pattern recognizing systems. We’re adapted to cope with the material universe around us, it’s no wonder if we recognize patterns in it, but not in white noise or nothingness.
“{Interestingness to humans} has an exponential relationship with time over the lifetime of the universe” packs a lot less of a sense of physical inevitability. The universe is not optimized for the development of {interestingness to humans}. We’ve certainly made the world a lot more interesting for ourselves in our recent history, but that doesn’t suggest it’s part of a universal trend. The calendar you linked to, for instance, lists the K-T extinction event, the most famous although not the greatest of five global mass extinction events. Each of those resulted in a large, albeit temporary, reduction in global ecosystem diversity, which strikes me as a pretty big hit to {interestingness to humans}. And while technology has been increasing exponentially throughout the global stage recently, there have been plenty of empire collapses and losses of culture which probably mark significant losses of {interestingness to humans} as well.
So, what your post really relies upon is the proposition that {interestingness to humans} can be made to experience an endless exponential increase over time, without leaving Earth. I am convinced that the reasonable default assumption given the available data is that it cannot.
Humans possess powerful pattern recognizing systems
Yes, and I was trying to show how this relates to intelligence, and how intelligence requires compression, and thus relates to complexity.
We’re adapted to cope with the material universe around us, it’s no wonder if we recognize patterns in it, but not in white noise or nothingness.
We recognize patterns because the universe is actually made of patterns. The recognition is no more arbitrary than thermodynamics or quantum physics.
What you appear to be doing is substituting in a black box function of your own mind as a fundamental character of the universe. You see qualities that seem interesting and complex, and you label them “complexity” when they would be better characterized as {interestingness to humans} (or more precisely, {interestingness to jacob_cannell}, but there’s a lot of overlap there.)
No. One of the principal sub-functions of minds/intelligences in general is general compression. The patterns are part of the fundamental character of the universe, and it is that reality which shapes minds, not the other way around.
Complexity is not {interestingness to humans}. Although of course {interestingness to humans} is related to complexity, because our minds learn/model/represent patterns, we find patterns ‘interesting’ because they allow us to model that which exists, and complexity is a pattern-measure.
I suspect we could agree more on complexity if we could algorithmically define it, even though that shouldn’t be necessary (but I will resort to that shortly as a secondary measure). We could probably agree on what ‘humans’ are without a mathematical definition, and we could probably agree on how the number of humans has been changing over time.
Imagine if we could also loosely agree on what ‘things’ or unique patterns are in general, and then we could form a taxonomy over all patterns, where some patterns have is-a relationships to other patterns and are in turn built out of sub-patterns, forming a loosely hierarchical network. We could then roughly define pattern complexity as the hierarchical network rank order of the pattern in the pattern network. A dog is a mammal which is an animal, so complexity increases along that path, for example, and a dog is more complex than any of its subcomponents. We could then define ‘events’ as temporal changes in the set of patterns (within some pocket of the universe). We could then rank events in terms of complexity changes, based on the change in complexity of the whole composite-pattern (within space-time pockets).
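As a toy sketch of that rank-order idea (the tiny taxonomy here is invented purely for illustration, not a claim about the real pattern network), complexity can be read off as depth in an is-a hierarchy:

```python
# Toy is-a taxonomy, child -> parent. Invented for illustration only.
IS_A = {
    "dog": "mammal",
    "cat": "mammal",
    "mammal": "animal",
    "animal": "pattern",  # root of this toy hierarchy
}

def rank(pattern):
    """Hierarchical rank: number of is-a links from the pattern up to the root."""
    depth = 0
    while pattern in IS_A:
        pattern = IS_A[pattern]
        depth += 1
    return depth

# Complexity increases along the path animal -> mammal -> dog.
print(rank("animal"), rank("mammal"), rank("dog"))  # 1 2 3
```

The real proposal would of course need a taxonomy grounded in learned compression rather than a hand-written dictionary, but the rank-order measure itself is this simple.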
Then we make a graph of a set of the top N events.
We then see the U shape trend in complexity change over time.
If you want a more mathematical definition, take Kolmogorov complexity and modify it to be computationally tractable. If K(X) is the K-complexity of string X defined by the minimal program which outputs X (maximal compression), then we define CK(X, M, T) as the minimal program which best approximates X subject to memory-space M and time T constraints. Moving from intractable lossless compression to lossy practical compression makes this modified definition of complexity computable in theory (but its exact definition still requires optimal lossy compression algorithms). We are interested in CK complexity of the order computable to humans and AIs in the near future.
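No optimal lossy compressor exists to instantiate CK exactly. As a crude stand-in (zlib is a lossless and decidedly sub-optimal compressor, so this only gives an upper bound on the idea), compressed length already behaves like a tractable complexity estimate:

```python
import os
import zlib

def ck_estimate(data: bytes, level: int = 9) -> int:
    """Tractable stand-in for CK: length of the zlib-compressed
    representation. A real CK would use optimal lossy compression
    under memory/time budgets; zlib is merely a convenient proxy."""
    return len(zlib.compress(data, level))

uniform = b"a" * 10_000      # one repeated symbol: minimal structure
periodic = b"10010" * 2_000  # short repeating pattern
noise = os.urandom(10_000)   # max entropy: essentially incompressible

print(ck_estimate(uniform), ck_estimate(periodic), ck_estimate(noise))
assert ck_estimate(uniform) < 200    # structure compresses away
assert ck_estimate(periodic) < 200
assert ck_estimate(noise) > 9_000    # noise barely compresses at all
```

The ranking (structured data small, noise large) is exactly the property the CK definition needs; what it cannot capture is the lossy, resource-bounded part.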
Complexity != {interestingness to humans}
“{Interestingness to humans} has an exponential relationship with time over the lifetime of the universe” packs a lot less of a sense of physical inevitability.
Complexity over time does appear to follow an inevitable upward accelerating trend in many localized sub-pockets of the universe, mirroring the big bang in reverse, and again the trend is not exponential—it’s a 1/x type shape.
The trend is nothing like a smooth line. It is noisy, and there have been some apparent complexity dips, as you mention, although the overall trend is undeniably accelerating and the best fit is the U shape leading towards a local vertical asymptote. As a side note, complexity/systems theorists would point out that most extinctions actually caused large increases in net complexity, and were some of the most important evolutionary stimuli. Counterintuitive, but true.
Complexity is not {interestingness to humans}. Although of course {interestingness to humans} is related to complexity, because our minds learn/model/represent patterns, we find patterns ‘interesting’ because they allow us to model that which exists, and complexity is a pattern-measure.
I suspect we could agree more on complexity if we could algorithmically define it, even though that shouldn’t be necessary (but I will resort to that shortly as a secondary measure). We could probably agree on what ‘humans’ are without a mathematical definition, and we could probably agree on how the number of humans has been changing over time.
Things can be extraordinarily complex without being particularly interesting to humans. We don’t have a fully general absolute pattern recognizing system; that would be an evolutionary hindrance even if it were something that could practically be developed. There are simply too many possible patterns in too many possible contexts. It’s not advantageous for us to be interested in all of them.
I think we don’t agree on what this “complexity” is because it’s not a natural category. You’re insisting that it’s fundamental because it feels fundamental to you, but you can’t demonstrate that it’s fundamental, and I simply don’t buy that it is.
The trend is nothing like a smooth line. It is noisy, and there have been some apparent complexity dips, as you mention, although the overall trend is undeniably accelerating and the best fit is the U shape leading towards a local vertical asymptote. As a side note, complexity/systems theorists would point out that most extinctions actually caused large increases in net complexity, and were some of the most important evolutionary stimuli. Counterintuitive, but true.
Eventually. Ecosystem diversity eventually bounces back, and while a large number of genera and families die out, most orders retain representatives, so there’s still plenty of genetic diversity to spread out and reoccupy old niches, and potentially create new ones in the process. But there’s no fundamental principle that demands that massive extinction events must lead to increased ecosystem complexity even in the long term; for a long term decrease, you’d simply have to wipe out genetic diversity on a higher level. A UFAI event, for example, could easily lead to a massive drop in ecosystem complexity.
The number of possible patterns in an information cluster is superexponential with the size of the information cluster
Firstly, you are misquoting EY’s post: the possible number of patterns in a string grows exponentially with the number of bits, as expected. It is the number of ‘concepts’ which grows super-exponentially, where EY is defining concept very loosely as any program which classifies patterns. The super-exponential growth in concepts is combinatoric and just stems from naive specific classifiers which recognize combinations of specific patterns.
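The counting behind that distinction is elementary: over N-bit strings there are 2^N possible patterns, but each binary classifier is a function from those strings to accept/reject, so there are 2^(2^N) classifiers, which is where the super-exponential blow-up comes from:

```python
def num_strings(n_bits: int) -> int:
    # Every bit is free, so there are 2^N observable strings.
    return 2 ** n_bits

def num_classifiers(n_bits: int) -> int:
    # A classifier assigns accept/reject to each of the 2^N strings,
    # so there are 2^(2^N) of them: super-exponential in N.
    return 2 ** num_strings(n_bits)

for n in range(1, 5):
    print(n, num_strings(n), num_classifiers(n))
# 1 2 4
# 2 4 16
# 3 8 256
# 4 16 65536
```

The string count is the exponential growth; the classifier count is the combinatoric explosion of naive ‘concepts’ over those strings.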
Secondly, this doesn’t really relate to universal pattern recognition, which is concerned only with optimal data classifications according to a criterion such as entropy maximization.
As a simple example, consider the set of binary strings of length N. There are 2^N possible observable strings, and a super-exponential combinatoric set of naive classifiers. But consider observed data sequences of the form 10010 10010 10010 repeated ad infinitum. Any form of optimal extropy maximization will reduce this to something of the form repeat “10010” indefinitely.
In general any given sequence of observations has a single unique compressed (extropy reduced) representation, which corresponds to its fundamental optimal ‘pattern’ representation.
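The 10010 example can be made concrete with a minimal period-finder (a toy stand-in for real compression: it only handles exactly-periodic data, and is in no sense an optimal extropy maximizer):

```python
def shortest_period(s: str) -> str:
    """Return the shortest substring whose repetition generates s.
    A toy 'pattern extractor': it recovers 'repeat P indefinitely'
    whenever the data really is a pure repetition of P."""
    n = len(s)
    for p in range(1, n + 1):
        if n % p == 0 and s[:p] * (n // p) == s:
            return s[:p]
    return s  # aperiodic: the string is its own shortest description here

data = "10010" * 6
print(shortest_period(data))  # '10010'
```

Any of the 2^N strings is accepted as input, but only the underlying generator “10010” survives compression, which is the sense in which the recovered pattern is unique and non-arbitrary.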
Can you demonstrate that the patterns you’re recognizing are non-arbitrary? Patterns that are natural to us often seem fundamental even when they are not.
Depends on what you mean. It’s rather trivial to construct simple universal extropy maximizers/optimizers—just survey the basic building blocks of unsupervised learning algorithms. The cortical circuit performs similar computations.
For example the 2D edge patterns that cortical tissue (and any good unsupervised learning algorithm) learns to represent when exposed to real world video are absolutely not arbitrary in the slightest. This should be obvious.
If you mean higher level thought abstractions by “the patterns you’re recognizing”, then the issue becomes more complex. Certainly the patterns we currently recognize at the highest level are not optimal extractions, if that’s what you mean. But nor are they arbitrary. If they were arbitrary our cortex would have no purpose, would confer no selection advantage, and would not exist.
We don’t have a fully general absolute pattern recognizing system;
We do have a fully general pattern recognition system. I’m not sure what you mean by “general absolute”.
that would be an evolutionary hindrance even if it were something that could practically be developed.
General recognition systems are trivial to construct, and require far less genetic information to specify than specific pattern recognition systems.
Specific recognition systems have the tremendous advantage that they work instantly without any optimization time. A general recognition system has to be slowly trained on the patterns of data present in the observations—this requires time and lots of computation.
Simpler, short-lived organisms rely more on specific recognition systems and circuitry for this reason, as they allow newborn creatures to start with initial ‘pre-programmed’ intelligence. This actually requires considerably more genetic complexity than general learning systems.
Mammals grew larger brains with increasing reliance on general learning/recognition systems because it provides a tremendous flexibility advantage at the cost of requiring larger brains, longer gestation, longer initial development immaturity, etc. In primates and humans especially this trend is maximized. Human infant brains have very little going on initially except powerful general meta-algorithms which will eventually generate specific algorithms in response to the observed environment.
I think we don’t agree on what this “complexity” is because it’s not a natural category
The concept of “natural category” is probably less well defined than “complexity” itself, so it probably won’t shed too much light on our discussion.
That being said, from that post he describes it as:
I’ve chosen the phrase “unnatural category” to describe a category whose boundary you draw in a way that sensitively depends on the exact values built into your utility function.
In that sense complexity is absolutely a natural category.
Look at Kolmogorov complexity. It is a fundamental property of information, and information is the fundamental property of modern physics. So that definition of complexity is as natural as you can get, and is right up there with entropy. Unfortunately that definition itself is not perfect (it is uncomputable and too close to entropy), but computable variants of it exist; one used in a computational biology paper I was browsing recently (measuring the tendency towards increased complexity in biological systems) defined complexity as compressed information minus entropy, which may be the best fit to the intuitive concept.
Intuitively I could explain it as follows.
The information complexity of an intelligent system is a measure of the fundamental statistical pattern structure it extracts from its environment. If the information it observes is already at maximum entropy (such as pure noise), then it is already maximally compressed, no further extraction is possible, and no learning is possible. At the other extreme, if the information observed is extremely uniform (low entropy) then it can be fully described/compressed by extremely simple low complexity programs. A learning system extracts entropy from its environment and grows in complexity in proportion.
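Those two extremes are easy to exhibit with a first-order Shannon entropy estimate (a crude proxy, since it ignores all sequential structure, but it separates the uniform case from the maximally mixed one):

```python
import math
from collections import Counter

def entropy_per_symbol(s: str) -> float:
    """First-order Shannon entropy in bits per symbol. A crude proxy for
    learnable structure: it ignores all higher-order (sequential) order."""
    n = len(s)
    return sum(-(c / n) * math.log2(c / n) for c in Counter(s).values())

# Low-entropy extreme: trivially compressible; nothing to learn beyond
# the one-symbol rule.
print(entropy_per_symbol("a" * 64))     # 0.0
# High-entropy extreme for this alphabet: 4 equiprobable symbols.
print(entropy_per_symbol("abcd" * 16))  # 2.0
```

A learner only has work to do, and room to grow, between the two extremes: structured data whose entropy is neither zero nor maximal.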
Depends on what you mean. It’s rather trivial to construct simple universal extropy maximizers/optimizers—just survey the basic building blocks of unsupervised learning algorithms. The cortical circuit performs similar computations.
For example the 2D edge patterns that cortical tissue (and any good unsupervised learning algorithm) learns to represent when exposed to real world video are absolutely not arbitrary in the slightest. This should be obvious.
It’s objective that our responses exist, and they occur in response to particular things. It’s not obvious that they occur in response to natural categories, rather than constructed categories like “sexy.”
We do have a fully general pattern recognition system. I’m not sure what you mean by “general absolute”.
“General absolute” was probably a poor choice of words, but I meant to express a system capable of recognizing all types of patterns in all contexts. There is an absolute, non arbitrary pattern here, do you recognize it?
Kolmogorov complexity is a fundamental character, but it’s not at all clear that we should want a Kolmogorov complexity optimizer acting on our universe, or that Kolmogorov complexity actually has much to do with the “complexity” you’re talking about. A message or system can be high in Kolmogorov complexity without being interesting to us, and it still seems to me that you’re conflating complexity with interestingness when they really don’t bear that sort of relationship.
“General absolute” was probably a poor choice of words, but I meant to express a system capable of recognizing all types of patterns in all contexts. There is an absolute, non arbitrary pattern here, do you recognize it?
I see your meaning—and no practical system is capable of recognizing all types of patterns in all contexts. A universal/general learning algorithm is simply one that can learn to recognize any pattern, given enough time/space/training. That doesn’t mean it will recognize any random pattern it hasn’t already learned.
I see hints of structure in your example but it doesn’t ring any bells.
Kolmogorov complexity is a fundamental character, but it’s not at all clear that we should want a Kolmogorov complexity optimizer acting on our universe
No, and that’s not my primary interest. Complexity seems to be the closest fit for something-important-which-has-been-changing over time on earth. If we had a good way to measure it, we could then make a quantitative model of that change and use that to predict the rate of change in the future, perhaps even ultimately reducing it to physical theory.
For example, one of the interesting new recent physics papers (entropic gravity) proposes that gravity is actually not a fundamental force or even spacetime curvature, but actually an entropic statistical pseudo-force. The paper is interesting because as a side effect it appears to correctly derive the mysterious cosmological constant for acceleration. As an unrelated side note I have an issue with it because it uses the holographic principle/Bekenstein bound for information density, which still appears to lead to lost-information paradoxes in my mind.
But anyway, if you look at a random patch of space-time, it is always slowly evolving to a higher-entropy state (2nd law), and this may be the main driver of most macroscopic tendencies (even gravity). It’s also quite apparent that a closely related measure—complexity—increases non-linearly in a fashion perhaps loosely like gravitational collapse. The non-linear dynamics are somewhat related—complexity tends to increase in proportion to the existing local complexity as a fraction of available entropy. In some regions, like on Earth, this appears to go super-critical, while in most places the growth is minuscule or non-existent.
It’s not apparent that complexity is increasing over time. In some respects, things seem to be getting more interesting over time, although I think that a lot of this is due to selective observation, but we don’t have any good reason to believe we’re dealing with a natural category here. If we were dealing with something like Kolmogorov complexity, at least we could know if we were dealing with a real phenomenon, but instead we’re dealing with some ill defined category for which we cannot establish a clear connection to any real physical quality.
For all that you claim that it’s obvious that some fundamental measure of complexity is increasing nonlinearly over time, not a lot of other people are making the same claim, having observed the same data, so it’s clearly not as obvious as all that.
If you take a small slice of laminar cortex and hook it up to an optic feed and show it image sequences, it develops into gabor-like filters which recognize/encode 2D edges. The gabor filters have been mathematically studied and are optimal entropy maximizing transforms for real world images. The edges are real because of the underlying statistical structure of the universe, and they don’t form if you show white noise or nothingness.
Now take that same type of operation and stack many of them on top of each other and add layers of recursion and you get something that starts clustering the universe into patterns—words.
These patterns which we regard as “psychologically inclined to regard as significant” are actual universal structural patterns in the universe, so even if the particular named sequences are arbitrary and the ‘importance’ is debatable, the patterns themselves are not arbitrary. See the cluster structure of thingspace and related posts.
See above. Complexity is approximated by words and concepts in the minds of intelligences. This relates back to optimal practical compression which is the core of intelligence.
Kolmogorov complexity is a start, but it’s not computationally tractable so it’s not a good definition. The proper definition of complexity requires an algorithmic definition of general optimal structural compression, which is the core sub-problem of intelligence. So in the future when we completely solve AI, we will have more concrete definitions of complexity. Until then, human judgement is a good approximation. And a first order approximation is “complexity is that which we use words to describe”.
Humans possess powerful pattern recognizing systems. We’re adapted to cope with the material universe around us, it’s no wonder if we recognize patterns in it, but not in white noise or nothingness.
What you appear to be doing is substituting in a black box function of your own mind as a fundamental character of the universe. You see qualities that seem interesting and complex, and you label them “complexity” when they would be better characterized as {interestingness to humans} (or more precisely, {interestingness to jacob_cannell}, but there’s a lot of overlap there.)
“{Interestingness to humans} has an exponential relationship with time over the lifetime of the universe” packs a lot less of a sense of physical inevitability. The universe is not optimized for the development of {interestingness to humans}. We’ve certainly made the world a lot more interesting for ourselves in our recent history, but that doesn’t suggest it’s part of a universal trend. The calendar you linked to, for instance, lists the K-T extinction event, the most famous although not the greatest of five global mass extinction events. Each of those resulted in a large, albeit temporary, reduction in global ecosystem diversity, which strikes me as a pretty big hit to {interestingness to humans}. And while technology has been increasing exponentially throughout the global stage recently, there have been plenty of empire collapses and losses of culture which probably mark significant losses of {interestingness to humans} as well.
So, what your post really relies upon is the proposition that {interestingness to humans} can be made to experience an endless exponential increase over time, without leaving Earth. I am convinced that the reasonable default assumption given the available data is that it cannot.
Yes, and I was trying to show how this relates to intelligence, and how intelligence requires compression, and thus relates to complexity.
We recognize patterns because the universe is actually made of patterns. The recognition is no more arbitrary than thermodynamics or quantum physics.
No. One of the principle sub-functions of minds/intelligences in general is general compression. The patterns are part of the fundamental character of the universe, and it is that reality which shapes minds, not the other way around.
Complexity is not {interestingness to humans}. Although of course {interestingness to humans} is related to complexity, because our minds learn/model/represent patterns, we find patterns ‘interesting’ because they allow us to model that which exists, and complexity is a pattern-measure.
I suspect we could agree more on complexity if we could algorithmically define it, even though that shouldn’t be necessary (but I will resort to that shortly as a secondary measure). We could probably agree on what ‘humans’ are without a mathematical definition, and we could probably agree on how the number of humans has been changing over time.
Imagine if we could also loosely agree on what ‘things’ or unique patterns are in general, and then we could form a taxonomy over all patterns, where some patterns have is-a relationships to other patterns and are in turn built out of sub-patterns, forming a loosely hierarchical network. We could then roughly define pattern complexity as the hierarchical network rank order of the pattern in the pattern network. A dog is a mammal which is an animal, so complexity increases along that path, for example, and a dog is more complex than any of it’s subcomponents. We could then define ‘events’ as temporal changes in the set of patterns (within some pocket of the universe). We could then rank events in terms of complexity changes, based on the change in complexity of the whole composite-pattern (within space-time pockets).
Then we make a graph of a set of the top N events.
We then see the U shape trend in complexity change over time.
If you want a more mathematical definition, take Kolmogorov complexity and modify it to be computationally tractable. If K(X) is the K-complexity of string X defined by the minimal program which outputs X (maximal compression), then we define CK(X, M, T) as the minimal program which best approximates X subject to memory-space M and time T constraints. Moving from intractable lossless compression to lossy practical compression makes this modified definition of complexity computable in theory (but it’s exact definition still requires optimal lossy compression algorithms). We are interested in CK complexity of the order computable to humans and AIs in the near future.
Complexity != {interestingness to humans}
Complexity over time does appears to follow an inevitable upward accelerating trend in many localized sub-pockets of the universe over time, mirroring the big bang in reverse, and again the trend is not exponential—it’s a 1/x type shape.
The trend is nothing like a smooth line. It is noisy, and there have been some apparent complexity dips, as you mention, although the overall trend is undeniably accelerating and the best fit is the U shape leading towards a local vertical asymptote. As a side note, complexity/systems theorists would point out that most extinctions actually caused large increases in net complexity, and were some of the most important evolutionary stimuli. Counterintuitive, but true.
The number of possible patterns in an information cluster is superexponential with the size of the information cluster. Can you demonstrate that the patterns you’re recognizing are non-arbitrary? Patterns that are natural to us often seem fundamental even when they are not.
Things can be extraordinarily complex without being particularly interesting to humans. We don’t have a fully general absolute pattern recognizing system; that would be an evolutionary hindrance even if it were something that could practically be developed. There are simply too many possible patterns in too many possible contexts. It’s not advantageous for us to be interested in all of them.
I think we don’t agree on what this “complexity” is because it’s not a natural category. You’re insisting that it’s fundamental because it feels fundamental to you, but you can’t demonstrate that it’s fundamental, and I simply don’t buy that it is.
Eventually. Ecosystem diversity eventually bounces back, and while a large number of genuses and families die out, most orders retain representatives, so there’s still plenty of genetic diversity to spread out and reoccupy old niches, and potentially create new ones in the process. But there’s no fundamental principle that demands that massive extinction events must lead to increased ecosystem complexity even in the long term; for a long term decrease, you’d simply have to wipe out genetic diversity on a higher level. An UFAI event, for example, could easily lead to a massive drop in ecosystem complexity.
Firstly, you are misquoting EY’s post: the possible number of patterns in a string grows exponentially with the number of bits, as expected. It is the number of ‘concepts’ which grows super-exponentially, where EY is defining concept very loosely as any program which classifies patterns. The super-exponential growth in concepts is combinatoric and just stems from naive specific classifiers which recognize combinations of specific patterns.
Secondly, this doesn’t really relate to universal pattern recognition, which is concerned only with optimal data classifications according to a criteria such as entropy maximization.
As a simple example, consider the set of binary strings of length N. There are 2^N possible observable strings, and a super-exponential combinatoric set of naive classifiers. But consider observed data sequences of the form 10010 10010 10010 repeated ad infinitum. Any form of optimal extropy maximization will reduce this to something of the form repeat “10010” indefinitely.
In general any given sequence of observations has a single unique compressed (extropy reduced) representation, which corresponds to it’s fundamental optimal ‘pattern’ representation.
Depends on what you mean. It’s rather trivial to construct simple universal extropy maximizers/optimizers—just survey the basic building blocks of unsupervised learning algorithms. The cortical circuit performs similar computations.
For example the 2D edge patterns that cortical tissue (and any good unsupervised learning algorithm) learns to represent when exposed to real world video are absolutely not arbitrary in the slightest. This should be obvious.
If you mean higher level thought abstractions by “the patterns you’re recognizing”, then the issue becomes more complex. Certainly the patterns we currently recognize at the highest level are not optimal extractions, if that’s what you mean. But nor are they arbitrary. If they were arbitrary our cortex would have no purpose, would confer no selection advantage, and would not exist.
We do have a fully general pattern recognition system. I’m not sure what you mean by “general absolute”.
They are trivial to construct, and require far less genetic information to specify than specific pattern recognition systems.
Specific recognition systems have the tremendous advantage that they work instantly without any optimization time. A general recognition system has to be slowly trained on the patterns of data present in the observations—this requires time and lots of computation.
Simpler short lived organisms rely more on specific recognition systems and circuitry for this reason as they allow newborn creatures to start with initial ‘pre-programmed’ intelligence. This actual requires considerably more genetic complexity than general learning systems.
Mammals grew larger brains with increasing reliance on general learning/recognition systems because it provides a tremendous flexibility advantage at the cost of requiring larger brains, longer gestation, longer initial development immaturity, etc. In primates and humans especially this trend is maximized. Human infant brains have very little going on initially except powerful general meta-algorithms which will eventually generate specific algorithms in response to the observed environment.
The concept of “natural category” is probably less well defined that “complexity” itself, so it probably won’t shed too much light on our discussion.
That being said, from that post he describes it as:
In that sense complexity is absolutely a natural category.
Look at Kolmogorov complexity. It is a fundamental (if uncomputable) property of information, and information is the fundamental property of modern physics. So that definition of complexity is as natural as you can get, and is right up there with entropy. Unfortunately that definition itself is not perfect and is too close to entropy, but computable variants of it exist. One, used in a computational biology paper I was browsing recently (measuring the tendency towards increased complexity in biological systems), defined complexity as compressed information minus entropy, which may be the best fit to the intuitive concept.
Intuitively I could explain it as follows.
The information complexity of an intelligent system is a measure of the fundamental statistical pattern structure it extracts from its environment. If the information it observes is already at maximum entropy (such as pure noise), then it is already maximally compressed, no further extraction is possible, and no learning is possible. At the other extreme, if the information observed is extremely uniform (low entropy), then it can be fully described/compressed by extremely simple low-complexity programs. A learning system extracts entropy from its environment and grows in complexity in proportion.
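This intuition is easy to check with an off-the-shelf compressor standing in for the ideal one (a rough sketch; zlib is nowhere near an optimal compressor, but the ordering comes out right). The three signals below are my own illustrative choices for the two extremes and the structured regime in between:

```python
import random
import zlib

random.seed(0)
n = 10_000

# Extremely uniform (low entropy): trivially compressible.
uniform = bytes(n)
# Pure noise (maximum entropy): essentially incompressible.
noise = bytes(random.randrange(256) for _ in range(n))
# Structured: a repeating motif with occasional random corruption --
# the in-between regime where there is actually something to learn.
structured = bytes(b if random.random() < 0.9 else random.randrange(256)
                   for b in (b"edge" * (n // 4)))

for name, data in [("uniform", uniform), ("structured", structured), ("noise", noise)]:
    print(name, len(zlib.compress(data, 9)))
```

The uniform signal compresses to almost nothing, the noise barely compresses at all, and the structured signal lands in between; note that the “compressed information minus entropy” style of measure mentioned above would single out exactly that middle case.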
It’s objective that our responses exist, and they occur in response to particular things. It’s not obvious that they occur in response to natural categories, rather than constructed categories like “sexy.”
“General absolute” was probably a poor choice of words, but I meant to express a system capable of recognizing all types of patterns in all contexts. There is an absolute, non-arbitrary pattern here, do you recognize it?
Kolmogorov complexity is a fundamental character, but it’s not at all clear that we should want a Kolmogorov complexity optimizer acting on our universe, or that Kolmogorov complexity actually has much to do with the “complexity” you’re talking about. A message or system can be high in Kolmogorov complexity without being interesting to us, and it still seems to me that you’re conflating complexity with interestingness when they really don’t bear that sort of relationship.
I see your meaning—and no practical system is capable of recognizing all types of patterns in all contexts. A universal/general learning algorithm is simply one that can learn to recognize any pattern, given enough time/space/training. That doesn’t mean it will recognize any random pattern it hasn’t already learned.
I see hints of structure in your example but it doesn’t ring any bells.
No, and that’s not my primary interest. Complexity seems to be the closest fit for something-important-which-has-been-changing over time on earth. If we had a good way to measure it, we could then make a quantitative model of that change and use that to predict the rate of change in the future, perhaps even ultimately reducing it to physical theory.
For example, one of the interesting recent physics papers (entropic gravity) proposes that gravity is actually not a fundamental force or even spacetime curvature, but an entropic statistical pseudo-force. The paper is interesting because as a side effect it appears to correctly derive the mysterious cosmological constant for acceleration. As an unrelated side note, I have an issue with it because it uses the holographic principle/Bekenstein bound for information density, which still appears to lead to lost-information paradoxes in my mind.
But anyway, if you look at a random patch of space-time, it is always slowly evolving to a higher-entropy state (2nd law), and this may be the main driver of most macroscopic tendencies (even gravity). It’s also quite apparent that a closely related measure—complexity—increases non-linearly, in a fashion perhaps loosely like gravitational collapse. The non-linear dynamics are somewhat related: complexity tends to increase in proportion to the existing local complexity as a fraction of available entropy. In some regions, like on earth, this appears to go super-critical, while in most places the growth is minuscule or non-existent.
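The claimed dynamics can be caricatured with a logistic-style toy model (entirely illustrative; the function, parameters, and “capacity” interpretation here are made up for the sketch, not drawn from any physical theory):

```python
def complexity_growth(c0, rate, capacity, steps=50):
    """Toy model: growth proportional to existing complexity, limited by
    the available entropy budget ('capacity'). Purely illustrative."""
    c = c0
    history = [c]
    for _ in range(steps):
        c += rate * c * (1 - c / capacity)
        history.append(c)
    return history

# Same starting complexity and entropy budget, different local growth rates:
hot = complexity_growth(c0=1.0, rate=0.5, capacity=100.0)    # "super-critical" region
cold = complexity_growth(c0=1.0, rate=0.01, capacity=100.0)  # near-barren region
print(hot[-1], cold[-1])
```

In the high-rate region complexity saturates its budget within the run, while the low-rate region barely moves, which is the qualitative supercritical-vs-minuscule split described above.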
It’s not apparent that complexity is increasing over time. In some respects, things seem to be getting more interesting over time, though I think a lot of this is due to selective observation, and we don’t have any good reason to believe we’re dealing with a natural category here. If we were dealing with something like Kolmogorov complexity, at least we could know whether we were dealing with a real phenomenon, but instead we’re dealing with some ill-defined category for which we cannot establish a clear connection to any real physical quality.
For all that you claim that it’s obvious that some fundamental measure of complexity is increasing nonlinearly over time, not a lot of other people are making the same claim, having observed the same data, so it’s clearly not as obvious as all that.