Results don’t discuss encoding/representation of abstractions
Totally agree with this one, it’s the main thing I’ve worked on over the past month and will probably be the main thing in the near future. I’d describe the previous results (i.e. ignoring encoding/representation) as characterizing the relationship between the high-level and the low-level.
Definitions depend on choice of variables Xi
The local/causal structure of our universe gives a very strong preferred way to “slice it up”; I expect that’s plenty sufficient for convergence of abstractions. For instance, it doesn’t make sense to use variables which “rotate together” the states of five different local patches of spacetime which are not close to each other. (Those five different local patches will generally not be rotated together by default in an evolving agent’s sensory feed, for example.)
That does still leave degrees of freedom in how we represent all the local patches, but those are exactly the degrees of freedom which don’t matter for natural abstraction. (Under the minimal latent formulation: we can represent each individual variable or set-of-variables-which-we’re-making-independent-of-some-other-stuff in a different way without changing anything informationally. Under the redundancy formulation: assume our resampling process allows simultaneous resampling of small sets of variables, to avoid the case where two variables are very tightly coupled to each other but otherwise independent of everything else. With that modification in place, the same argument as in the minimal latent formulation applies.)
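As a toy numerical illustration of the “representation doesn’t matter informationally” point (the setup below is purely illustrative, not part of any formal result): two noisy copies of a shared latent bit carry exactly the same mutual information with each other and with the latent, no matter how we relabel one of them.

```python
# Toy check: re-encoding a single variable leaves all mutual informations
# unchanged. Setup is illustrative only.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
n = 100_000

z = rng.integers(0, 2, n)                          # shared latent bit
x1 = z ^ (rng.random(n) < 0.1).astype(int)         # noisy copy of z
x2 = z ^ (rng.random(n) < 0.1).astype(int)         # another noisy copy

def relabel(x):
    # An arbitrary invertible re-encoding: 0 -> 7, 1 -> 3.
    return np.where(x == 0, 7, 3)

for name, a in [("original x1 ", x1), ("relabeled x1", relabel(x1))]:
    print(name,
          " I(x1;z) =", round(mutual_info_score(a, z), 4),
          " I(x1;x2) =", round(mutual_info_score(a, x2), 4))
# Both rows print identical values: how an individual variable is encoded
# carries no information-theoretic content.
```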
Theorems focus on infinite limits, but abstractions happen in finite regimes
Totally agree with this one too, and it has also been a major focus for me over the past couple months.
I’d also offer this as one defense of my relatively low level of formality to date: finite approximations are clearly the right way to go, and I didn’t yet know the best way to handle finite approximations. I gave proof sketches at roughly the level of precision which I expected to generalize to the eventual “right” formalizations. (The more general principle here is to only add formality when it’s the right formality, and not to prematurely add ad-hoc formulations just for the sake of making things more formal. If we don’t yet know the full right formality, then we should sketch at the level we think we do know.)
Missing theoretical support for several key claims
Basically agree with this. In particular, I think the quoted block is indeed a place where I was a bit overexcited at the time and made too strong a claim. More generally, for a while I was thinking of “deterministic constraints” as basically implying “low-dimensional” in practice, based on intuitions from physics. But in hindsight, that’s at least not externally-legibly true, and arguably not true in general at all.
Figuring out whether the Universality Hypothesis is true
… What we’re less convinced of is that the current theoretical approach is a good way to tackle this question. One worrying sign is that almost two years after the project announcement (and over three years after work on natural abstractions began), there still haven’t been major empirical tests, even though that was the original motivation for developing all of the theory. … Of course sometimes experiments do require upfront theory work. But in this case, we think that e.g. empirical interpretability work is already making progress on the Universality Hypothesis, whereas we’re unsure whether the natural abstractions agenda is much closer to major empirical tests than it was two years ago.
See the section on “Low level of precision...”. Also, You Are Not Measuring What You Think You Are Measuring is a very relevant principle here—I have lots of (not necessarily externally-legible) bits of evidence about a rough version of natural abstraction, but the details I’m still figuring out are (not coincidentally) exactly the details where it’s hard to tell whether we’re measuring the right thing.
Abstractions as a bottleneck for agent foundations: The high-level story for why abstractions seem important for formalizing e.g. values seems very plausible to us. It’s less clear to us whether they are necessary (or at least a good first step)
Yeah, I don’t think this should be externally-legibly clear right now. I think people need to spend a lot of time trying and failing to tackle agent foundations problems themselves, repeatedly running into the need for a proper model of abstraction, in order for this to be clear.
Accelerating alignment research: The promise behind this motivation is that having a theory of natural abstractions will make it much easier to find robust formalizations of abstractions such as “agency”, “optimizer”, or “modularity”. … To us, such an outcome seems unlikely, though it may still be worth pursuing
I probably put higher probability on success here than you do, but I don’t think it should be legibly clear.
Interpretability: … Figuring out the real-world meaning of internal network activations is one of the core themes of safety-motivated interpretability work. And reverse-engineering a network into “pseudocode” is not just some separate problem, it’s deeply intertwined. We typically understand the inputs of a network, so if we can figure out how the network transforms these inputs, that can let us test hypotheses for what the meaning of internal activations is.
An intuitive understanding of inputs plus a circuit is not, in general, sufficient to interpret the internal things computed by the circuit. Easy counterargument: neural nets are circuits, so if those two pieces were enough, we’d already be done; there would be no interpretability problem in the first place.
Existing work has managed to go from pseudocode/circuits to interpretation of inputs mainly by looking at cases where the circuits in question are very small and simple—e.g. edge detectors in Olah’s early work, or the sinusoidal elements in Neel’s work on modular addition. But this falls apart quickly as the circuits get bigger—e.g. later layers in vision nets, once we get past early things like edge and texture detectors.
Low level of precision and formalization
I mentioned earlier the heuristic of “only add formality when it’s the right formality; don’t prematurely add ad-hoc formulations just for the sake of making things more formal”.
More generally, if you’re used to academia, then bear in mind the incentives of academia push towards making one’s work defensible to a much greater degree than is probably optimal for truth-seeking. Formalization is one part of this: in academia, the incentive is usually to add ad-hoc formalization in order to get a full formal proof rather than a sketch, even if the ad-hoc formalization added does not match reality well. On the experimental side, the incentive is usually on bulletproof results, rather than gaining lots of information. (… and that’s the better case. In the worse case, the incentive is on jumping through certain hoops which are nominally about bulletproofing, but don’t even do that job very well, like e.g. statistical significance.) And yes, defensibility does have value even for truth-seeking, but there are tradeoffs and I advise against anchoring too much on academia.
With that in mind: both my current work and most of my work to date are aimed more at truth-seeking than defensibility. I don’t think I currently have all the right pieces, and I’m trying to get the right pieces quickly. For that purpose, it’s important to make the stuff I think I understand as legible as possible so that others can help. I try to accurately convey my models and epistemic state. But it’s not important to e.g. make it easy for others to point out mistakes in places where I didn’t think the formality was right anyway. If and when I have all the pieces, then I can worry about defensible proof.
That said, I agree with at least some parts of the critique. Being both precise and readable at the same time is hard, man.
Few experiments
As we briefly discussed earlier, we think it’s worrying that there haven’t been major experiments on the Natural Abstraction Hypothesis, given that John thinks of it as mostly an empirical claim. We would be excited to see more discussion on experiments that can be done right now to test (parts of) the natural abstractions agenda! We elaborate on a preliminary idea in the appendix (though it has a number of issues).
I do love your experiment ideas! The experiments I ran last summer had a similar flavor—relatively-simple checks on MNIST nets—though they were focused on the “information at a distance” lens rather than the redundancy or minimal latent lenses.
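For readers who want a concrete feel for that “information at a distance” flavor, here’s a crude toy check (not the actual experiments above, and it uses scikit-learn’s small digits dataset rather than MNIST to keep it dependency-light): average absolute correlation between pixel pairs, grouped by how far apart the pixels are.

```python
# Crude "information at a distance" toy check: pixel-pixel correlation vs.
# distance on 8x8 digit images. Illustrative only.
import numpy as np
from sklearn.datasets import load_digits

X = load_digits().data.astype(float)               # (n_samples, 64)
corr = np.corrcoef(X.T)                            # 64x64 pixel correlations
coords = np.array([(i, j) for i in range(8) for j in range(8)])

by_dist = {}
for a in range(64):
    for b in range(a + 1, 64):
        if np.isnan(corr[a, b]):                   # skip constant border pixels
            continue
        d = round(float(np.linalg.norm(coords[a] - coords[b])), 1)
        by_dist.setdefault(d, []).append(abs(corr[a, b]))

for d in sorted(by_dist)[:8]:
    print(f"distance {d:>4}: mean |corr| = {np.mean(by_dist[d]):.3f}")
# Nearby pixels are typically far more correlated than distant ones: most
# information about a pixel is carried by its neighbors.
```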
Anyway, similar answer here as the previous section: at this point I’m mainly trying to get to the right answers quickly, not trying to provide some impressive defensible proof. I run experiments insofar as they give me bits about what the right answers are.
Thanks for the responses! I think we qualitatively agree on a lot, just put emphasis on different things or land in different places on various axes. Responses to some of your points below:
The local/causal structure of our universe gives a very strong preferred way to “slice it up”; I expect that’s plenty sufficient for convergence of abstractions. [...]
Let me try to put the argument into my own words: because of locality, any “reasonable” variable transformation can in some sense be split into “local transformations”, each of which involves only a few variables. These local transformations aren’t a problem because if we, say, resample n variables at a time, then transforming m<n variables doesn’t affect redundant information.
I’m tentatively skeptical that we can split transformations up into these local components. E.g. to me it seems that describing some large number N of particles by their center of mass and the distance vectors from the center of mass is a very reasonable description. But it sounds like you have a notion of “reasonable” in mind that’s more specific than the set of all descriptions physicists might want to use.
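To spell out the example (the code below is just my toy sketch): the center-of-mass description is a perfectly sensible, invertible change of variables, but every new variable depends on every particle’s position, so it doesn’t decompose into transformations of a few variables at a time.

```python
# Toy center-of-mass reparameterization: invertible and physically natural,
# but every output depends on every input, i.e. it is not "local".
import numpy as np

rng = np.random.default_rng(0)
positions = rng.normal(size=(1000, 3))             # N particles in 3D, equal masses

def to_com_frame(pos):
    com = pos.mean(axis=0)                         # center of mass
    return com, pos - com                          # offsets from the center of mass

def from_com_frame(com, offsets):
    return offsets + com                           # exact inverse

com, offsets = to_com_frame(positions)
assert np.allclose(from_com_frame(com, offsets), positions)
# (The offsets sum to zero, so degrees of freedom match; the map is a global
# linear bijection onto that constraint surface, not a composition of small
# local transformations.)
```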
I also don’t see yet how exactly to make this work given local transformations—e.g. I think my version above doesn’t quite work because if you’re resampling a finite number n of variables at a time, then I do think transforms involving fewer than n variables can sometimes affect redundant information. I know you’ve talked before about resampling any finite number of variables in the context of a system with infinitely many variables, but I think we’ll want a theory that can also handle finite systems. Another reason this seems tricky: if you compose lots of local transformations, for overlapping local neighborhoods, you get a transformation involving lots of variables. I don’t currently see how to avoid that.
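A trivial illustration of that composition worry, with permutations standing in for general transformations: each step only swaps two adjacent variables, yet the composition cyclically shifts all of them.

```python
# Composing "local" transformations over overlapping neighborhoods yields a
# transformation that touches every variable.
n = 6
perm = list(range(n))
for i in range(n - 1):
    perm[i], perm[i + 1] = perm[i + 1], perm[i]    # each step touches only 2 variables
print(perm)                                        # [1, 2, 3, 4, 5, 0]: a full n-cycle
```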
I’d also offer this as one defense of my relatively low level of formality to date: finite approximations are clearly the right way to go, and I didn’t yet know the best way to handle finite approximations. I gave proof sketches at roughly the level of precision which I expected to generalize to the eventual “right” formalizations. (The more general principle here is to only add formality when it’s the right formality, and not to prematurely add ad-hoc formulations just for the sake of making things more formal. If we don’t yet know the full right formality, then we should sketch at the level we think we do know.)
Oh, I did not realize from your posts that this is how you were thinking about the results. I’m very sympathetic to the point that formalizing things in what is ultimately the wrong setting doesn’t help much (e.g. in our appendix, we recommend people focus on the conceptual open problems like finite regimes or encodings, rather than more formalization). We may disagree about how much progress the results to date represent regarding finite approximations. I’d say they contain conceptual ideas that may be important in a finite setting, but I also expect most of the work will lie in turning those ideas into non-trivial statements about finite settings. In contrast, most of your writing suggests to me that a large part of the theoretical work has been done (not sure to what extent this is a disagreement about the state of the theory or about communication).
Existing work has managed to go from pseudocode/circuits to interpretation of inputs mainly by looking at cases where the circuits in question are very small and simple—e.g. edge detectors in Olah’s early work, or the sinusoidal elements in Neel’s work on modular addition. But this falls apart quickly as the circuits get bigger—e.g. later layers in vision nets, once we get past early things like edge and texture detectors.
I totally agree with this FWIW, though we might disagree on some aspects of how to scale this to more realistic cases. I’m also very unsure whether I get how you concretely want to use a theory of abstractions for interpretability. My best story is something like: look for good abstractions in the model and then for each one, figure out what abstraction this is by looking at training examples that trigger the abstraction. If NAH is true, you can correctly figure out which abstraction you’re dealing with from just a few examples. But the important bit is that you start with a part of the model that’s actually a natural abstraction, which is why this approach doesn’t work if you just look at examples that make a neuron fire, or similar ad-hoc ideas.
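Concretely, the loop I have in mind looks roughly like the sketch below (everything here is hypothetical: the “direction” representation, the names, and the synthetic data are just stand-ins for however one would actually extract a candidate abstraction).

```python
# Hypothetical sketch: given activations and a candidate "abstraction direction",
# pull the training examples that most strongly trigger it.
import numpy as np

def top_triggering_examples(activations, direction, k=10):
    """activations: (n_examples, d) hidden activations from some layer;
    direction: (d,) candidate abstraction direction, however it was found."""
    scores = activations @ direction               # projection onto the candidate
    return np.argsort(scores)[::-1][:k]            # indices of the top-k examples

# Fake data just to make the sketch runnable:
acts = np.random.default_rng(0).normal(size=(500, 32))
direction = np.zeros(32)
direction[7] = 1.0
print(top_triggering_examples(acts, direction, k=5))
# If NAH holds, a handful of such examples should pin down which abstraction the
# direction encodes; the hard part is finding directions that are actually
# natural abstractions, rather than e.g. arbitrary single neurons.
```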
More generally, if you’re used to academia, then bear in mind the incentives of academia push towards making one’s work defensible to a much greater degree than is probably optimal for truth-seeking.
I agree with this. I’ve done stuff in some of my past papers that was just for defensibility and didn’t make sense from a truth-seeking perspective. I absolutely think many people in academia would profit from updating in the direction you describe, if their goal is truth-seeking (which it should be if they want to do helpful alignment research!).
On the other hand, I’d guess the optimal amount of precision (for truth-seeking) is higher in my view than it is in yours. One crux might be that you seem to have a tighter association between precision and tackling the wrong questions than I do. I agree that obsessing too much about defensibility and precision will lead you to tackle the wrong questions, but I think this is feasible to avoid. (Though as I said, I think many people, especially in academia, don’t successfully avoid this problem! Maybe the best quick fix for them would be to worry less about precision, but I’m not sure how much that would help.) And I think there’s also an important failure mode where people constantly think about important problems but never get any concrete results that can actually be used for anything.
It also seems likely that different levels of precision are genuinely right for different people (e.g. I’m unsurprisingly much more confident about what the right level of precision is for me than about what it is for you). To be blunt, I would still guess the style of arguments and definitions in your posts only work well for very few people in the long run, but of course I’m aware you have lots of details in your head that aren’t in your posts, and I’m also very much in favor of people just listening to their own research taste.
both my current work and most of my work to date is aimed more at truth-seeking than defensibility. I don’t think I currently have all the right pieces, and I’m trying to get the right pieces quickly.
Yeah, to be clear I think this is the right call, I just think that more precision would be better for quickly arriving at useful true results (with the caveats above about different styles being good for different people, and the danger of overshooting).
Being both precise and readable at the same time is hard, man.
Yeah, definitely. And I think different trade-offs between precision and readability are genuinely best for different readers, which doesn’t make it easier. (I think this is a good argument for separate distiller roles: if researchers have different styles, and can write best for readers with a similar style of thinking, then plausibly any piece of research should have a distillation written by someone with a different style, even if the original was already well written for a certain audience. It’s probably not that extreme; I think it’s often possible to find a good trade-off that works for most people, though it’s hard.)
We may disagree about how much progress the results to date represent regarding finite approximations. I’d say they contain conceptual ideas that may be important in a finite setting, but I also expect most of the work will lie in turning those ideas into non-trivial statements about finite settings. In contrast, most of your writing suggests to me that a large part of the theoretical work has been done (not sure to what extent this is a disagreement about the state of the theory or about communication).
Perhaps your instincts here are better than mine! Going to the finite case has indeed turned out to be more difficult than I expected at the time of writing most of the posts you reviewed.