Nothing is “mere.” I, too, can see the stars on a desert night, and feel them. But do I see less or more? The vastness of the heavens stretches my imagination—stuck on this carousel, my little eye can catch one-million-year-old light. A vast pattern—of which I am a part—perhaps my stuff was belched from some forgotten star, as one is belching there. Or see them with the greater eye of Palomar, rushing all apart from some common starting point when they were perhaps all together. What is the pattern, or the meaning, or the why? It does not do harm to the mystery to know a little about it.
- Richard P. Feynman on The Relation of Physics to Other Sciences
Speaking from the perspective of someone still developing basic mathematical maturity and often lacking prerequisites, it’s very useful as a learning aid. For example, it significantly expanded the range of papers or technical results accessible to me. If I’m reading a paper containing unfamiliar math, I no longer have to go down the rabbit hole of tracing prerequisite dependencies, which often expand exponentially (partly because I don’t know which results or sections in the prerequisite texts are essential, making it difficult to scope my focus). Now I can simply ask the LLM for a self-contained exposition. Using traditional means of self-studying like [search engines / Wikipedia / StackExchange] is very often no match for this task, mostly in terms of time spent or wasted effort; simply having someone I can directly ask my highly specific (and often dumb) questions or confusions and receive equally specific responses is just really useful.
Non-Shannon-type Inequalities
The first new qualitative thing in Information Theory when you move from two variables to three variables is the presence of negative values: information measures (entropy, conditional entropy, mutual information) are always nonnegative for two variables, but the triple mutual information $I(X;Y;Z)$ can be negative.
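A standard example (my addition, not from the original post): let $X$ and $Y$ be independent fair bits and $Z = X \oplus Y$. Then
$$I(X;Y;Z) = I(X;Y) - I(X;Y\mid Z) = 0 - 1 = -1 \text{ bit},$$
since $X$ and $Y$ are independent on their own but become perfectly dependent once $Z$ is known.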
This so far is a relatively well-known fact. But what is the first new qualitative thing when moving from three to four variables? Non-Shannon-type Inequalities.
A fundamental result in Information Theory is that $I(X;Y\mid Z) \ge 0$ always holds.
Given random variables $X_1$ and $X_2$, from now on we write $X_{12}$ for the joint variable $(X_1, X_2)$, and similarly for more variables: a string of indices stands for the joint variable formed by the variables they correspond to.
Since $I(X;Y\mid Z) \ge 0$ always holds, a nonnegative linear combination of a bunch of these is always a valid inequality, which we call a Shannon-type Inequality.
Then the question is whether Shannon-type Inequalities capture all valid information inequalities of $n$ variables. It turns out: yes for $n = 2$, (approximately) yes for $n = 3$, and no for $n \ge 4$.
Behold, the glorious Zhang-Yeung inequality, a Non-Shannon-type Inequality for $n = 4$:
$$2I(X_3;X_4) \le I(X_1;X_2) + I(X_1;X_{34}) + 3I(X_3;X_4\mid X_1) + I(X_3;X_4\mid X_2).$$
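A quick numerical sanity check (my own sketch, not from the linked papers; the variable names and brute-force setup are my choices), assuming the form of the inequality stated above:

```python
# Numerically check the Zhang-Yeung inequality on random joint distributions
# over four binary variables (X1, X2, X3, X4).
import numpy as np

rng = np.random.default_rng(0)

def H(p, subset):
    """Joint entropy (in bits) of the variables indexed by `subset`,
    given the full joint pmf `p` of shape (2, 2, 2, 2)."""
    drop = tuple(i for i in range(p.ndim) if i not in subset)
    marg = p.sum(axis=drop) if drop else p
    marg = marg[marg > 0]
    return float(-(marg * np.log2(marg)).sum())

def zhang_yeung_slack(p):
    def I(X, Y, Z=()):  # I(X;Y|Z) = H(XZ) + H(YZ) - H(XYZ) - H(Z)
        X, Y, Z = set(X), set(Y), set(Z)
        return H(p, X | Z) + H(p, Y | Z) - H(p, X | Y | Z) - (H(p, Z) if Z else 0.0)
    A, B, C, D = (0,), (1,), (2,), (3,)
    lhs = I(A, B) + I(A, C + D) + 3 * I(C, D, Z=A) + I(C, D, Z=B)
    return lhs - 2 * I(C, D)  # should be >= 0 for every joint distribution

for _ in range(1000):
    p = rng.random((2, 2, 2, 2))
    p /= p.sum()
    assert zhang_yeung_slack(p) > -1e-9
print("Zhang-Yeung inequality held on 1000 random joint distributions.")
```

Of course this only checks that the inequality is valid; the interesting part is that it is not derivable as a nonnegative combination of the basic inequalities $I(\cdot\,;\cdot\mid\cdot) \ge 0$.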
Explanation of the math, for anyone curious.
Given random variables $X_1, \dots, X_n$, it turns out that $I(X;Y\mid Z) \ge 0$ (ranging over all choices of joint variables for $X$, $Y$, $Z$) is equivalent to the following conditions on joint entropy: $H(X_S) + H(X_T) \ge H(X_{S\cup T}) + H(X_{S\cap T})$ (submodularity), $H(X_S) \le H(X_T)$ if $S \subseteq T$ (monotonicity), and $H(X_\varnothing) = 0$.
This lets us write the inequality involving conditional mutual information in terms of joint entropy instead.
Let $\Gamma^*_n$ then be the subset of $\mathbb{R}^{2^n - 1}$ whose elements correspond to the values of the joint entropy assigned to each nonempty subset of some random variables $X_1, \dots, X_n$. For example, an element of $\Gamma^*_2$ would be $(H(X_1), H(X_2), H(X_{12}))$ for some random variables $X_1$ and $X_2$, with a different element being a different tuple induced by a different pair of random variables.
Now let $\Gamma_n$ represent the elements of $\mathbb{R}^{2^n - 1}$ satisfying the three aforementioned conditions on joint entropy. For example, an element $(h_1, h_2, h_{12})$ of $\Gamma_2$ would satisfy, e.g., $h_1 \le h_{12}$ (monotonicity). This is also a convex cone (in fact a polyhedral one), so the inequalities valid on it really do correspond to “nonnegative linear combinations” of Shannon-type inequalities.
Then, the claim that “nonnegative linear combinations of Shannon-type inequalities span all inequalities on the possible Shannon measures” would correspond to the claim that $\Gamma^*_n = \Gamma_n$ for all $n$.
The content of the papers linked above is to show that:
$\Gamma^*_3 \ne \Gamma_3$, but $\overline{\Gamma^*_3} = \Gamma_3$ (closure[1])
and $\overline{\Gamma^*_4} \ne \Gamma_4$, and also $\overline{\Gamma^*_n} \ne \Gamma_n$ for all $n \ge 4$.
- ^
This implies that, while there exists a $(2^3 - 1)$-tuple satisfying the Shannon-type inequalities that can't be constructed or realized by any random variables $X_1, X_2, X_3$, there does exist a sequence of random variables whose induced tuples of joint entropies converge to that tuple in the limit.
Relevant: Alignment as a Bottleneck to Usefulness of GPT-3
between alignment and capabilities, which is the main bottleneck to getting value out of GPT-like models, both in the short term and the long(er) term?
By the way, Gemini 2.5 Pro and o3-mini-high are good at tic-tac-toe. I was surprised, because the last time I tested this on o1-preview, it did quite terribly.
Where in the literature can I find the proof of the lower bound?
Previous discussion, comment by A.H.:
Sorry to be a party pooper, but I find the story of Jason Padgett (the guy who ‘banged his head and became a math genius’) completely unconvincing. From the video that you cite, here is the ‘evidence’ that he is a ‘math genius’:
He tells us, with no context, ‘the inner boundary of pi is f(x)=x sin(pi/x)’. Ok!
He makes ‘math inspired’ drawings (some of which admittedly are pretty cool but they’re not exactly original) and sells them on his website
He claims that a physicist (who is not named or interviewed) saw him drawing in the mall, and, on the basis of this, suggested that he study physics.
He went to ‘school’ and studied math and physics. He says he started with basic algebra and calculus and apparently ‘aced all the classes’, but doesn’t tell us what level he reached. Graduate? Post-graduate?
He was ‘doing integrals with triangles instead of integrals with rectangles’
He tells us ‘every shape in the universe is a fractal’
Some fMRI scans were done on his brain which found ‘he had conscious access to parts of the brain we don’t normally have access to’.
I wrote “your brain can wind up settling on either of [the two generative models]”, not both at once.
Ah, that makes sense. So the picture I should have is: whatever local algorithm the brain runs oscillates over time between multiple local MAP solutions that correspond to qualitatively different high-level information (e.g., clockwise vs. counterclockwise). Concretely, something like the metastable states of a Hopfield network, or the update steps of predictive coding (literally gradient updates to find the MAP solution for perception!!) oscillating between multiple local minima?
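To make that picture concrete for myself, here is a minimal toy sketch (my own illustration, not anything from the discussion above): a Hopfield network storing two patterns, where slightly different perturbations of an ambiguous initial state settle into one attractor or the other, never a blend of both.

```python
# Minimal toy sketch: a Hopfield network with two stored patterns. An
# ambiguous initial state settles into one attractor or the other depending
# on small perturbations, loosely analogous to bistable perception settling
# on one of two MAP-like solutions.
import numpy as np

rng = np.random.default_rng(0)
N = 64
p1 = rng.choice([-1, 1], size=N)                 # "clockwise" pattern
p2 = rng.choice([-1, 1], size=N)                 # "counterclockwise" pattern
W = (np.outer(p1, p1) + np.outer(p2, p2)) / N    # Hebbian weights
np.fill_diagonal(W, 0)

def settle(state, sweeps=20):
    """Asynchronous sign updates: descend the energy until a fixed point."""
    s = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Roughly equidistant from both stored patterns (ties broken toward +1).
ambiguous = np.sign(p1 + p2 + 1e-9)
for trial in range(5):
    noisy = ambiguous * rng.choice([1, -1], size=N, p=[0.9, 0.1])  # small jitter
    s = settle(noisy)
    print(f"trial {trial}: overlap with p1 = {(s @ p1) / N:+.2f}, "
          f"with p2 = {(s @ p2) / N:+.2f}")
# Each trial converges to (essentially) p1 or p2, not a blend of the two.
```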
Curious about the claim regarding bistable perception as the brain “settling” differently on two distinct but roughly equally plausible generative model parameters behind an observation. In standard statistical terms, should I think of it as: two parameters having similarly high Bayesian posterior probability, but the brain not explicitly representing this posterior, instead using something like local hill climbing to find a local MAP solution—bistable perception corresponding to the two different solutions this process converges to?
If correct, to what extent should I interpret the brain as finding a single solution (MLE/MAP) versus representing a superposition or distribution over multiple solutions (fully Bayesian)? Specifically, in which context should I interpret the phrase “the brain settling on two different generative models”?
Towards building blocks of ontologies
I just read your koan and wow it’s a great post, thank you for writing it. It also gave me some new insights as to how to think about my confusions and some answers. Here’s my chain of thought:
if I want my information theoretic quantities to not degenerate, then I need some distribution over the weights. What is the natural distribution to consider?
Well, there’s the Bayesian posterior.
But I feel like there is a sense in which an individual neural network with its weight should be considered as a deterministic information processing system on its own, without reference to an ensemble.
Using the Bayesian posterior won’t let me do this:
If I have a fixed neural network that contains a circuit $C$ that takes activation $A$ (at a particular location in the network) to produce activation $B$ (at a different location), it would make sense to ask questions about the nature of the information processing that $C$ does, like $I(A;B)$.
But intuitively, taking the weights as unknown averages everything out—even if my original fixed network had a relatively high probability density under the Bayesian posterior, it is unlikely that $A$ and $B$ would be related by similar circuit mechanisms under another weight sampled at random from the posterior.
Same with sampling from the post-SGD distribution.
So it would be nice to find a way to interpolate between the two. And I think the idea of a tempered local Bayesian posterior from your koan post basically is the right way to do this! (and all of this makes me think the papers that measure mutual information between activations in different layers via introducing a noise distribution over the parameters of the network are a lot more reasonable than I originally thought)
I like the definition: it’s the minimum expected code length for a distribution under constraints on the code (namely, constraints on the kind of beliefs you’re allowed to have—after settling on a belief, the optimal code is, as always, the negative log prob).
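For reference, the general shape of the definition being described (my reconstruction, written in the style of the predictive V-information framework; the paper's own notation may differ) is something like
$$H_{\mathcal{V}}(Y \mid X) \;=\; \inf_{f \in \mathcal{V}} \ \mathbb{E}_{x,\,y}\!\left[-\log f[x](y)\right], \qquad I_{\mathcal{V}}(X \to Y) \;=\; H_{\mathcal{V}}(Y \mid \varnothing) - H_{\mathcal{V}}(Y \mid X),$$
where $\mathcal{V}$ is the constrained family of predictive distributions (the "kind of beliefs you're allowed to have"). Taking $\mathcal{V}$ to be all distributions recovers the usual Shannon quantities, while restricting $\mathcal{V}$ to, say, Gaussians is what yields log-determinant-of-covariance expressions like the one mentioned below.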
Also, the examples in Proposition 1 were pretty cool in that they gave new characterizations of some well-known quantities—the log determinant of the covariance matrix does indeed intuitively measure the uncertainty of a random variable, but it is very cool to see that it in fact has an entropy interpretation!
It’s kinda sad because after a brief search it seems like none of the original authors are interested in extending this framework.
That makes sense. I’ve updated towards thinking this is reasonable (albeit binning and discretization are still ad hoc) and captures something real.
We could formalize it like $I_\sigma(X; f(X)) := I(X; f(X) + \varepsilon_\sigma)$, where $\varepsilon_\sigma$ is some independent noise parameterized by $\sigma$. Then $I_\sigma$ would become finite. We could think of binning the output of a layer to make it stochastic in a similar way.
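A minimal numerical sketch of that finiteness claim (my own illustration, assuming Gaussian output noise and a toy one-dimensional map; the names and estimator choices are mine, not from the papers under discussion):

```python
# Estimate I_sigma(X; f(X)) = I(X; f(X) + eps) for a deterministic map f,
# using I(X; Y) = h(Y) - h(Y | X) = h(f(X) + eps) - h(eps): h(eps) is known
# in closed form for Gaussian noise, and h(Y) is estimated by a kernel density.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def noisy_mi_bits(f, sigma, n=20_000, n_eval=2_000):
    """Monte Carlo estimate of I(X; f(X) + eps), X ~ N(0,1), eps ~ N(0, sigma^2)."""
    x = rng.normal(size=n)
    y = f(x) + rng.normal(scale=sigma, size=n)
    h_y = -np.mean(np.log(gaussian_kde(y)(y[:n_eval])))   # h(Y), in nats
    h_eps = 0.5 * np.log(2 * np.pi * np.e * sigma**2)     # h(eps), in nats
    return (h_y - h_eps) / np.log(2)

f = lambda x: np.tanh(3 * x)   # stand-in for a deterministic "layer"
for sigma in (1.0, 0.1, 0.01):
    print(f"sigma = {sigma:<5}  I_sigma ≈ {noisy_mi_bits(f, sigma):.2f} bits")
# Finite for every sigma > 0, but growing without bound as sigma -> 0,
# matching the fact that I(X; f(X)) itself is infinite for this continuous map.
```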
Ideally we’d like the new measure to be finite even for deterministic maps (this is the case for $I_\sigma$ above) and some strict data processing inequality like $I_\sigma(X; g(f(X))) < I_\sigma(X; f(X))$ to hold, the intuition being that each step of the map adds more noise.
But is just up to a constant that depends on the noise statistic, so the above is an equality.
The issue is that the above intuition is based on each application of $f$ and $g$ adding additional noise to the input (just like how discretization lets us do this: each layer further discretizes and bins its input, leading to a gradual loss of information, hence letting mutual information capture something real in the sense of the number of bits needed to recover information up to a certain precision across layers), but $I_\sigma$ just adds an independent noise. So any relaxation of $I$ will have to depend on the functional structure of $f$.
With that (+ Dmitry’s comment on precision scale), I think the papers that measure mutual information between activations in different layers with a noise distribution over the parameters of the network sound a lot more reasonable than I originally thought.
Ah you’re right. I was thinking about the deterministic case.
Your explanation of the Jacobian term accounting for features “squeezing together” makes me update towards thinking that maybe the quantization done to turn neural networks from continuous & deterministic into discrete & stochastic, while ad hoc, isn’t as unreasonable as I originally thought. This paper is where I got the idea that discretization is bad because it “conflates ‘information theoretic stuff’ with ‘geometric stuff’, like clustering”—but perhaps this is in fact capturing something real.
Thank you, the first three examples make sense and seem like an appropriate use of mutual information. I’d like to ask about the fourth example though, where you take the weights as unknown:
What’s the operational meaning of these mutual-information quantities under some distribution over the weights? More importantly: what kind of theoretical questions are these quantities an answer to? (I’m curious about examples in which such quantities came out as a natural answer to the research questions you were asking in practice.)
I would guess that the choice of distribution (maybe the Bayesian posterior, maybe the post-training SGD distribution) and the operational meaning it has will depend on what kind of questions we’re interested in answering, but I don’t yet have a good idea as to in what contexts these quantities come up as answers to natural research questions.
And more generally, do you think Shannon information measures (at least in your research experience) basically work for most theoretical purposes in saying interesting stuff about deterministic neural networks, or do you think we need something else?
Reason being (sorry, this is more vibes; I am confused and can’t seem to state something more precise): neural networks seem like they could (and perhaps should) be analyzed as individual deterministic information-processing systems without reference to ensembles, since each individual setting of the weights is an individual algorithm that on its own does some “information processing”, and we want to understand its nature.
Meanwhile, Shannon information measures seem bad at this, since all they care about is the abstract partition that functions induce on their domain via the preimage map. Reversible networks (even when trained), for example, induce the same partition no matter how many layers you stack, so from the perspective of information theory everything looks the same—even though, as individual weights without considering the whole ensemble, the networks are changing their information-processing nature in some qualitative way, like representation learning—changing the amount of easily accessible / usable information—hence why I thought something along the lines of V-information might be useful.
I would be surprised (but happy!) if such notions could be captured by cleverly using Shannon information measures. I think that’s what the papers attempting to put a distribution over weights were doing, using language like (in my words) “the noise distribution is an arbitrary choice and that’s fine, since it is only used as a means of probing information” or something like that, but the justifications feel hand-wavy. I have yet to see a convincing argument for why this works, or more generally how Shannon measures could capture these aspects of information processing (like usable information).
[Question] What’s the Right Way to think about Information Theoretic quantities in Neural Networks?
Proof Explained for “Robust Agents Learn Causal World Model”
An Illustrated Summary of “Robust Agents Learn Causal World Model”
The 3-4 Chasm of Theoretical Progress
epistemic status: unoriginal. trying to spread a useful framing of theoretical progress introduced in an old post.
Tl;dr, often the greatest theoretical challenge comes from the step of crossing the chasm from [developing an impractical solution to a problem] to [developing some sort of a polytime solution to a problem], because the nature of their solutions can be opposites.
Summarizing Diffractor’s post on Program Search and Incomplete Understanding:
Taking a foundational problem all the way to its implementation often involves the following steps (some may be skipped):
developing a philosophical problem
developing a solution given infinite computing power
developing an impractical solution
developing some sort of polytime solution
developing a practical solution
and he says that it is often during the 3 → 4 step that understanding gets stuck and the most technical and brute-force math (and, I would add, sometimes philosophical) work is needed, because:
a common motif in 3) is that they’re able to prove interesting things about their solutions, like asymptotic properties, by e.g. having their algorithms iterate through all Turing machines, hence somewhat conferring the properties of the really good Turing machine solution that exists somewhere in this massive search space onto the overall search algorithm (up to a massive constant, usually).
think of Levin’s Universal Search, AIXItl, Logical Induction.
he says such algorithms are secretly black-box algorithms; there are no real gears.
Meanwhile, algorithms in 4) have the opposite nature—they are polynomial often because they characterize exploitable patterns that make a particular class of problems easier than most others, which requires Real Understanding. So algorithms of 3) and 4) often look nothing alike.
I liked this post and the idea of the “3-4 chasm,” because it explicitly captures why I personally felt that, e.g., AIT might be less useful for my work: after reading this post, I realized that, for example, when I refer to the word “structure,” I’m usually pointing at the kind of insights required to cross the 3-4 gap, while others might be using the same word to refer to things at a different level. This causes me to get confused as to how some tool X that someone brought up is supposed to help with the 3-4 gap I’m interested in.[1]
Vanessa Kosoy refers to this post, saying (in my translation of her words) that a lot of the 3-4 gap in computational learning theory has to do with our lack of understanding of deep learning theory, like how the NP-complete barrier is circumvented in practical problems, what restrictions we can put on our hypothesis class to make it efficiently learnable in the same way our world seems efficiently learnable, etc.
She mentions that this gap, at least in the context of deep learning theory, isn’t too much of a pressing problem because it already has mainstream attention—which explains why a lot of her work seems to lie in the 1-3 regime.
I asked GPT for examples of past crossings of the 3-4 chasm in other domains, and it suggested [Shannon’s original technically-constructive-but-highly-infeasible proof for the existence of optimal codes] vs. [recent progress on Turbocodes that actually approach this limit while being very practical], which seems like a perfect example.
- ^
AIT specifically seems to be useful primarily at the 1-2 level?
Thank you! I tried it on this post, and while the post itself is pretty short, the raw content that I get seems to be extremely long (larger than the o1 context window, for example), with a bunch of font-related information in between. Is there a way to fix this?
Previous discussion, comment by johnswentworth: