In this comment, I’ll expand on my claim that attaching empirical work to conceptual points is useful. (This is extracted from an unreleased post I wrote a long time ago.)
Even if the main contribution of some work is a conceptual framework or other conceptual ideas, it’s often extremely important to attach some empirical work, regardless of whether that empirical work should result in any substantial update for a well-informed individual. Often, careful first-principles reasoning combined with observations from tangentially related empirical work suffices to predict the important parts of the empirical results. Empirical work still matters, both because some people won’t be able to verify careful first-principles reasoning and because some people won’t engage with this sort of reasoning if there aren’t also empirical results stapled to it (at least as examples). It also helps to very concretely demonstrate that empirical work in the area is possible. This was a substantial part of the motivation for why we tried to get empirical results for AI control even though the empirical work seemed unlikely to update us much on the viability of control overall[1]. I think this view on the necessity of empirical work also (correctly) predicts that the sleeper agents paper[2] will likely cause people to be substantially more focused on deceptive alignment and more likely to create model organisms of deceptive alignment, even though its actual empirical contribution isn’t that large over careful first-principles reasoning combined with analysis of tangentially related empirical work, leading to complaints like Dan H’s.
I think that “you need accompanying empirical work for your conceptual work to be well received and understood” is fine, and this might be the expected outcome even if all actors were investing exactly as much as they should in thinking through conceptual arguments and related work. (The optimal investment across fields in general might even be exactly the current level of investment; it’s hard to avoid making mistakes with first-principles reasoning, and there is a lot of garbage in many fields, including in the AI x-risk space.)
Though it did predictably provide sufficient updates on various topics to be worth doing regardless of the argument I’m making in this section.
I’m an author on this paper, though only in an advisory capacity.
Kudos for correctly identifying the main cruxy point here, even though I didn’t talk about it directly.
The main reason I use the term “propaganda” here is that it’s an accurate description of the useful function of such papers, i.e. to convince people of things, as opposed to directly advancing our cutting-edge understanding/tools. The connotation is that propagandists over the years have correctly realized that presenting empirical findings is not a very effective way to convince people of things, and that applies to these papers as well.
And I would say that people are usually correct to not update much on empirical findings! Not Measuring What You Think You Are Measuring is a very strong default, especially among the type of papers we’re talking about here.
The connotation is that propagandists over the years have correctly realized that presenting empirical findings is not a very effective way to convince people of things

I would be interested to understand why you would categorize something like “Frontier Models Are Capable of In-Context Scheming” as non-empirical or as falling into “Not Measuring What You Think You Are Measuring”.