Summary
An abstraction of a high-dimensional random variable X is a low-dimensional summary G(X) that can be used to make predictions about X. If X is sampled from some parameterized distribution P(X | theta), G(X) may take the form of a sufficient statistic, i.e., a function of X such that P(theta | X) = P(theta | G(X)). To make predictions about X, one may then infer theta from the posterior P(theta | G(X)) and sample a new data point X from P(X | theta).
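As a concrete standard example (mine, not from John's post): if X = (X_1, ..., X_n) consists of n i.i.d. samples from a Gaussian with unknown mean and variance, then the pair of empirical moments

$$G(X) = \left( \sum_{i=1}^{n} X_i,\; \sum_{i=1}^{n} X_i^2 \right)$$

is a sufficient statistic: the posterior over (mean, variance) depends on the n-dimensional sample only through this 2-dimensional summary.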
In this post, John shows that if you have a very low-dimensional sufficient statistic G(X), then in many cases, X will “almost follow” the exponential family form and thus be fairly easy to deal with.
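For reference, the exponential family form in question (this is the standard definition, not anything specific to John's post): a distribution whose dependence on theta is channeled entirely through G(X) can be written as

$$P(X \mid \theta) = h(X)\, \exp\!\big( \theta^\top G(X) - A(\theta) \big),$$

where h(X) is a base measure and A(theta) is the log-normalizer. Such distributions are analytically convenient: updating on data only requires tracking the low-dimensional statistic G(X).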
Further Thoughts
I don’t yet quite understand what John wants to use the theorem for, but I hope to learn this from his project update post. My current guess is that he would like to identify abstractions with sufficient statistics of exponential families, and that this might be a data structure that is “easier to identify” in the world and in trained machine learning models than our initial “broad guess” for what abstractions could be.
Note that I’m writing this down mainly to have a prediction to update once I read John’s update post. This seems strictly more useful than having no prediction at all, even though I don’t expect my prediction to be fully correct or sufficiently nuanced.
Another thought is that I feel slightly uneasy about the viewpoint that abstractions are the thing we use to make “predictions about X”. In reality, if a person is in a particular state (meaning that the person is represented by an extremely high-dimensional sample vector X), then to make predictions I only use a very low-dimensional summary based on my sense of the person’s body part positions and coarse brain state. However, these predictions are not about X: I don’t use the summary to sample high-dimensional vectors consistent with it; instead, I use the summary to make predictions about summaries themselves. That is: what will the person’s mental state and body positions be a moment from now, and how will this impact abstractions of other objects in that person’s vicinity? There should then be a “commutative diagram” relating objects in reality to their low-dimensional abstractions, and real-world state transitions to predictions at the abstract level.
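To make that picture concrete, here is the diagram I have in mind (my own sketch, not from John’s post, with hypothetical symbols: T denotes the real-world transition and T̂ the abstract-level prediction):

$$\begin{array}{ccc} X_t & \xrightarrow{\;T\;} & X_{t+1} \\ {\scriptstyle G}\,\downarrow & & \downarrow\,{\scriptstyle G} \\ G(X_t) & \xrightarrow{\;\hat{T}\;} & G(X_{t+1}) \end{array}$$

The diagram commutes, at least approximately, when G(T(X_t)) ≈ T̂(G(X_t)): abstracting the evolved state should give nearly the same answer as evolving the abstraction.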
I hope to eventually learn more about how this abstraction work feeds into such questions.