The standard methods of explanation, and the standard definitions, of ‘information’, such as ‘the resolution of uncertainty’, are especially difficult to put into practice, since they presuppose knowledge already composed of, and/or formed from, a large quantity of information, such as the concepts of ‘uncertainty’ and ‘resolution’.
How does one know they’ve truly learned these concepts, necessary for recognizing information, without already understanding the nature of information?
This seems to produce a recursive problem, a.k.a. a ‘chicken and egg’ problem.
Additionally, the capability to recognize information and differentiate it from random noise must already exist in order to recognize and understand any definition of information, and indeed to understand any sentence at all. So it’s a multiply recursive problem.
Since, presumably, most members of this forum can understand sentences, how does this occur?
And since presumably no one could do so at birth, how does this capability arise in the intervening period from birth to adulthood?
Information = correlated with future sensory inputs. Noise = not correlated with future sensory inputs. Then strengthen by iteration (particularly useful information = correlated with those parts of future sensory inputs that were previously identified as particularly informative—that’s the part where prior knowledge of what’s important is relevant).
Evolution works the same way (except rather than future sensory information, it “cares” about future genetic fitness).
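The criterion above can be turned into a toy simulation (my own sketch, not from any referenced post, with made-up data): a cue that correlates with future sensory input counts as information; an uncorrelated channel counts as noise.

```python
import random

random.seed(0)

def pearson(xs, ys):
    # plain Pearson correlation, no external libraries
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

T = 5000
cue = [random.gauss(0, 1) for _ in range(T)]    # candidate signal
junk = [random.gauss(0, 1) for _ in range(T)]   # pure noise channel
# future sensory input is partly driven by the cue, plus randomness
future = [0.7 * c + random.gauss(0, 1) for c in cue]

print(pearson(cue, future))   # clearly nonzero: counts as information
print(pearson(junk, future))  # near zero: counts as noise
```

The iterative-strengthening step would then reuse whichever channels scored as informative when scoring new candidates.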
The theory is sensible, but it would imply that human thoughts are entirely based on iterative pattern matching, which is difficult to accept for a number of reasons, such as the fact that many people, including many posters on LW, claim to have ‘agency’.
Also, this leads to the problem of how ‘prior knowledge’ arose (or how ‘agency’ arose). Was it really just the primordial single-celled organism flailing around for a few billion years until ‘knowledge’ came to be?
Ah, no: what I specified is, effectively, a definition of information and how it can be recognized in theory. In practice, humans “run” a bunch of imprecise, approximate algorithms that evolution stumbled upon. Just because the definition is recursive does not mean that following that recursive definition is the optimal algorithm for a very bounded reasoner to arrive at a good approximation quickly. And humans are not particularly optimal either. But there is a lot of evidence that much of what is going on in humans is effectively prediction-error minimization. (E.g. https://www.lesswrong.com/posts/Cu7yv4eM6dCeA67Af/minimization-of-prediction-error-as-a-foundation-for-human, the first relevant Google result; no particular endorsement of that post.)
Isn’t minimizing prediction error a form of pattern matching?
Not necessarily. There seems to be a category error in your question: “minimizing prediction error” = task, “pattern matching” = algorithm. Multiple algorithms could potentially be used for the same task (particularly if you are just trying your best at an approximate solution, or if evolution is doing the best it can blindly stumble upon). For example, if you have a good model, you can use it to minimize prediction error analytically.
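To illustrate the task/algorithm distinction with a toy sketch (my own construction, with made-up data): the same prediction-error-minimization task, handled once by a pattern-matching algorithm (nearest-neighbour lookup) and once by an analytic model (closed-form least squares), with no pattern matching involved in the latter.

```python
import random

random.seed(1)

# the task: predict y from x so as to minimize squared prediction error
train_x = [random.uniform(0, 10) for _ in range(200)]
train_y = [2.0 * x + 1.0 + random.gauss(0, 0.1) for x in train_x]

# algorithm 1: pattern matching (nearest-neighbour lookup)
def predict_nn(x):
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

# algorithm 2: an analytic model (closed-form least squares)
n = len(train_x)
mx, my = sum(train_x) / n, sum(train_y) / n
slope = sum((x - mx) * (y - my) for x, y in zip(train_x, train_y)) \
        / sum((x - mx) ** 2 for x in train_x)
intercept = my - slope * mx

def predict_model(x):
    return slope * x + intercept

# both algorithms achieve low prediction error on fresh data
test_x = [random.uniform(0, 10) for _ in range(100)]
test_y = [2.0 * x + 1.0 + random.gauss(0, 0.1) for x in test_x]

def mse(predict):
    return sum((predict(x) - y) ** 2 for x, y in zip(test_x, test_y)) / len(test_x)

print(mse(predict_nn), mse(predict_model))  # both small
```

Two very different methods, one task: which is exactly why the task cannot be “a form of” one of the methods.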
These categories are according to whom?
I could not find them in any of the major dictionaries, such as the OED, or SEP.
Huh? You asked a question that I categorize as asking “Isn’t [task] a form of [an algorithm]?”, which does not seem to me to be a well-formed question, as the same task can be accomplished by many different algorithms. My point about the category error is meant to point out that the perceived ill-formedness is not due to the specific task/algorithm, but due to the question’s structure. If I am misunderstanding your question, I would appreciate a rephrasing that would make clearer what exactly you meant to ask.
‘=’ is usually used to denote strict equality, as in x = y. If you had some non-standard meaning in mind, I’m not sure what it is unless you spell it out explicitly.
For example, if by ‘=’ you mean ‘often reduces to’, as in:
‘pattern matching’ often reduces to an algorithm
Then you could skip the equals sign to reduce ambiguity.
If you really intended to literally equate ‘pattern matching’ with ‘algorithms’, then you need to actually prove it, or reference someone else who does.
I used “instance = category” notation, which I admit is not necessarily clear. Okay, let me restate to be more clear. “Isn’t minimizing prediction error a form of pattern matching?” is, in my opinion, an ill-formed question due to a category (type) mismatch. “Minimizing prediction error” is a particular task, and “pattern matching” is a particular type of algorithm, that is, a particular method one can choose to use to accomplish a task. Questions of the form “Isn’t [a particular task] a form of [a method for accomplishing tasks]?” are ill-formed because tasks and methods do not have the “a form of” relationship. The relationship they do have is that methods can be suitable (or not suitable) for a particular task, and, depending on the task, multiple different methods might be suitable, as is indeed the case for the “minimizing prediction error” task.
You still have to actually prove this or reference someone who does.
Anyways,
What makes you believe one must be a ‘particular task’ and another a ‘particular type of algorithm’?
Because, contrary to your claim, there are algorithms that minimize prediction error and tasks such as pattern matching, and some folks prefer this division. In fact, you can use the search tool on LW and find quite a few examples.
“algorithms that minimize prediction error” = algorithms that perform the task of minimizing the prediction error.
Anyway, even if you disagree on the categorization, the point that there are plenty of algorithms for the task of minimizing prediction error, many of which have nothing to do with pattern matching, still suggests to me that your original question was a weird question to ask.
“pattern matching” is a particular type of task using algorithms
See how this can just go on endlessly if everyone is allowed to claim their own categorization scheme and insist it must be true by default?
It’s clear that no serious conversations could occur on LW if that was the case.
In other words, asserting your own scheme and claiming it’s the one that should be used by all participants going forward isn’t going to be treated seriously. The usual expectation is for the claimant to provide solid proof for any novel claim.
I keep saying that your question does not make sense to me, and I never claimed that my interpretation must be universal; I keep asking you to clarify your question, yet you do not.
So by “pattern matching” you mean simply the task of finding the most closely matching pattern, not the methodology/class of algorithms that first finds the closest matching pattern and then takes the next step based on what that pattern is? If so, what does that even have to do with the minimization of prediction error?
I also do not understand why you insist on discussing this incidental terminological disagreement, rather than addressing my bigger point that minimization of prediction error could be done via models (implicit or explicit). As an example, babies learn to segment reality into objects and understand object permanence at a very young age, and many animals understand object permanence too. This is relevant because object permanence is a form of predictive rule that helps reduce prediction error. Over time, humans form more complex models of the world, some implicit, and some explicitly reasoned with, which help predict future sensory inputs.
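A toy sketch of that point (my own construction, not from the thread): a predictor with an object-permanence rule keeps extrapolating a hidden object’s motion behind an occluder, while a predictor without the rule treats the object as gone; the permanence rule accumulates far less prediction error.

```python
# an object moves at constant velocity; an occluder covers positions 4 to 7
velocity = 0.5
true_pos = [velocity * t for t in range(21)]

def hidden(p):
    return 4.0 <= p <= 7.0

err_perm = err_none = 0.0   # accumulated squared prediction error
belief = true_pos[0]
for t in range(1, 21):
    belief += velocity                  # internal constant-velocity model
    p = true_pos[t]
    if hidden(p):
        pred_perm = belief              # permanence: "it is still there, moving"
        pred_none = 0.0                 # no permanence: "out of sight, gone"
    else:
        pred_perm = pred_none = belief  # both predict normally when visible
        belief = p                      # resynchronize belief with observation
    err_perm += (pred_perm - p) ** 2
    err_none += (pred_none - p) ** 2

print(err_perm, err_none)  # the permanence rule accumulates far less error
```

The rule here is an explicit model; the claim is that the same error reduction is what makes the implicit version worth learning.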
I don’t see that in your first comment addressing my question. Reproduced below:
If you intended to point to the replies after this, nothing said afterwards could retroactively nullify or preempt the prior claims (‘There seems to be a category error in your question’) presented in the initial reply. You have to explicitly retract the prior claims.
Because the usual custom in discussions is to address points as they were presented chronologically, unless announced otherwise beforehand.
Sorry, I do not believe continuing this “conversation” is a productive use of my time.
Okay, my point still stands as it’s common knowledge. If you don’t wish to follow the usual rule of addressing claims as they appear then consider announcing it next time at the beginning of a discussion before time is spent.
P.S. and it’s ironic given the title of your post—seems you truly do not know the difference between noise and information, as you insist on discussing the noise, rather than trying to filter it out in pursuit of information.
If you plan to waste people’s time by insisting on pedantically addressing each disagreement, no matter how insignificant, to death, in chronological order, rather than following the more standard LW convention of making an effort to quickly arrive at the crux of any disagreement, announce it first in your post before people spend time engaging with your posts.
That was the prior claim.
After making two replies to my one comment, you seem to now believe the opposite?
Changing positions and/or claims after every comment seems to indicate a more fundamental issue.
As a word of advice, contradicting your own last stated claim so obviously is highly damaging to credibility.
If you don’t intend to actually leave a conversation then try not to claim the opposite.