How do we know that “good research” is good? (aka “direct evaluation” vs “eigen-evaluation”)

AI Alignment is my motivating context but this could apply elsewhere too.

The nascent field of AI Alignment research is pretty happening these days. There are multiple orgs and dozens to low hundreds of full-time researchers pursuing approaches to ensure AI goes well for humanity. Many are heartened that there’s at least some good research happening, at least in the opinion of some of the good researchers. This is reason for hope, I have heard.

But how do we know whether or not we have produced “good research?”

I think there are two main routes to determining that research is good, and yet only one applies in the research field of aligning superintelligent AIs.

“It’s good because it works”

The first and better way to know that your research is good is that it allows you to accomplish some goal you care about[1]. Examples:

  • My work on efficient orbital mechanics calculation is good because it successfully lets me predict the trajectory of satellites.

  • My work on the disruption of cell signaling in malign tumors is good because it helped me develop successful anti-cancer vaccines.

  • My work on solid-state physics is good because it allowed me to produce superconductors at a higher temperature and lower pressure than previously attained.[2]

In each case, there’s some outcome I care about pretty inherently for itself, and if the research helps me attain that outcome it’s good (or conversely if it doesn’t, it’s bad). The good researchers in my field are those who have produced a bunch of good research towards the aims of the field.

Sometimes it’s not clear-cut. Perhaps I figured out some specific cell signaling pathways that will be useful if it turns out that cell signaling disruption in general is useful, and that’s TBD pending therapies currently in trials, so we might not know how good (i.e. useful) my research was for many more years. This actually takes us into what I think is the second meaning of “good research”.

“It’s good because we all agree it’s good”

If our goal is to successfully navigate the creation of superintelligent AI such that humans are happy with the outcome, then it is too early to properly score existing research on how helpful it will be. No one has aligned a superintelligence. No one’s research has contributed to the alignment of an actual superintelligence.

At this point, the best we can do is share our predictions about how useful research will turn out to be. “This is good research” = “I think this research will turn out to be helpful”. “That person is a good researcher” = “That person produces much research that will turn out to be useful and/or has good models and predictions of which research will turn out to help”.

To talk about the good research that’s being produced is simply to say that we have a bunch of shared predictions that there exists research that will eventually help. To speak of the “good researchers” is to speak of the people whose work many agree is likely helpful and whose opinions are likely correct.

Even if the predictions are based on reasoning that we scrutinize and debate extensively, they are still predictions of usefulness and not observations of usefulness.

Someone might object that there’s empirical research we can see yielding results, e.g. in interpretability/steering or in demonstrating deception-like behavior. While you can observe an outcome there, it’s not the outcome we really care about, namely aligning superintelligent AI, and the relevance of this work is still just a prediction. It’s like succeeding at certain kinds of cell signaling modeling before we’re confident that’s a useful approach.

More like “good” = “our community’s PageRank-style Eigen-evaluation rates this research highly”

It’s a little bit interesting to unpack “agreeing that some research is good”. Obviously, not everyone’s opinion matters equally. Alignment research has new recruits and it has its leading figures. When leading figures evaluate research and researchers positively, others will tend to trust them.

Yet the leading figures are only leading figures because other people agreed their work was good, including before they were leading figures with extra vote strength. But now that they’re leading figures, their votes count extra.

This isn’t that much of a problem though. I think the way this operates in practice is like an “Eigen” system such as Google’s PageRank and the proposed ideas of Eigenmorality and Eigenkarma[3].

Imagine everyone starts out with equal voting strength in the communal research evaluation. At t1, people evaluate research, and the researchers gain or lose respect. This in turn raises or lowers their vote strength in the communal assessment. With further timesteps, research-respect accrues to certain individuals who are deemed good or leading figures, and whose evaluations of other research and researchers are deemed especially trustworthy.
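
To make the dynamic concrete, here is a minimal sketch in Python of how such an Eigen-evaluation could operate. The endorsement matrix, the damping factor, and the eigen_evaluate helper are all made up for illustration; it's just PageRank-style power iteration over who-endorses-whom, under the assumption that "vote strength" flows along endorsements, not a description of any actual system the community uses.

```python
import numpy as np

# Hypothetical endorsement matrix: entry (i, j) = 1 means researcher i
# endorses researcher j's work. Made up purely for illustration.
endorsements = np.array([
    [0.0, 1.0, 1.0, 0.0],  # researcher 0 endorses 1 and 2
    [1.0, 0.0, 1.0, 0.0],  # researcher 1 endorses 0 and 2
    [1.0, 1.0, 0.0, 1.0],  # researcher 2 endorses 0, 1, and 3
    [0.0, 0.0, 1.0, 0.0],  # researcher 3 endorses 2
])

def eigen_evaluate(E, damping=0.85, iters=100):
    """PageRank-style power iteration: vote strength is the stationary
    distribution of endorsement flow, so endorsements coming from
    high-strength researchers count for more."""
    n = E.shape[0]
    # Each researcher distributes one unit of endorsement across those they endorse.
    # (Assumes every row contains at least one endorsement.)
    P = E / E.sum(axis=1, keepdims=True)
    strength = np.full(n, 1.0 / n)  # t0: everyone starts with equal vote strength
    for _ in range(iters):
        strength = (1 - damping) / n + damping * (strength @ P)
    return strength

print(eigen_evaluate(endorsements))
# Researcher 2, endorsed by everyone, ends up with the most vote strength,
# so their endorsements in turn carry the most weight.
```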

Name recognition in a rapidly growing field where there isn’t time for everyone to read everything likely functions to entrench leading figures and canonize their views.

In the absence of the ability to objectively evaluate research against the outcome we care about, I think this is a fine way, maybe the best way, for things to operate. But it admits a lot more room for error.

Four reasons why tracking this distinction is important

Remembering that we don’t have good feedback here

Operating without feedback loops is pretty terrifying. I intend to elaborate on this in future posts, but my general feeling is that humans are generally poor at making predictions several steps out from what we can empirically test. Modern science is largely the realization that to understand the world, we have to test empirically and carefully[4]. I think it’s important not to forget that’s what we’re doing in AI alignment research, and recognizing that “good alignment research” means predicted to be useful rather than concretely evaluated as useful is part of that.

Staying alert to degradations of the communal Eigen-evaluation of research

While this system makes sense in the absence of direct feedback, I think it works better when everyone is contributing their own judgments, and it starts to degrade when it becomes overwhelmingly about popularity and who defers to whom. We want the field more like a prediction market and less like a fashion subculture.

Preserving the incentive to try very different ideas

There’s less incentive to try very different ideas, since even if those ideas would work eventually, you won’t be able to prove it. Consider how a no-name could come along and prove their ideas about heavier-than-air flight correct just by building a contraption that clearly flies; convincing people that your novel conceptual alignment ideas are any good is a much longer uphill battle.

Maintaining methods for top new work to gain recognition

Those early on the scene had the advantage that there was less to read back then, so it was easier to get name recognition for your contributions. Over time, there’s more competition, and I can see work of equal or greater caliber having a much harder time getting broadly noticed. Ideally, we’ve got curation processes in place such that someone could, even now, become as respected a leading figure as those of yore for work of about equal goodness (as judged by the eigen-collective, of course).

Some final points of clarification

  • I think this is a useful distinction pointing at something real. Better handles for the types of research evaluation might be direct-outcome-evaluation vs communal-estimation-prediction.

  • This distinction makes more sense where there’s an element of engineering towards desired outcomes vs a more purely predictive science.

  • I haven’t spent much time thinking about this, but I think the distinction applies in other fields where some of the evaluation is direct-outcome and some is communal-estimation. Hard sciences lean more toward direct-outcome evaluation, while social sciences rely more on communal-estimation.

    • AI Alignment is just necessarily at an extreme end of the split between the two.

    • For fields that can empirically evaluate their final outcome at all, there’s maybe a kind of “slow feedback loop” that periodically validates or invalidates the faster communal-estimation that’s been happening.

  • In truth, you actually never fully escape communal evaluation, because even with concrete empirical experiments, the researcher community must evaluate and interpret the experiments within an agreed-upon paradigm (via some Eigen-evaluation process, also thanks Hume). However, the quantitative difference gets so large it is basically qualitative.

  • There are assumptions under which intermediary results (e.g. a bunch of SAE outputs) in AI Alignment are more valuable and more clearly constitute progress. However, I don’t think they change the field from being fundamentally driven by communal-estimation. They can’t, because belief in the value of intermediary outputs and the associated assumptions is itself coming from [contested/controversial] communal-estimation, not something validated with reference to the outcomes.

    • I can imagine people wanting to bring up timelines and takeoff speeds here as being relevant. At the end of the day, those too are matters of communal-estimation, and questions on which the community disagrees.

  • I think how good or bad communal-estimation is relative to direct-outcome evaluation is a debate worth having. My strongest claim in this post is that this is a meaningful distinction. It’s a secondary claim for me that communal-estimation is vastly more fallible, but I haven’t actually argued that with particular rigor in this post.

  • I first began thinking about all of this when trying to figure out how to build better infrastructure for the Alignment research community. I still think projects along the lines of “improve how well the Eigen-evaluation process happens” are worth effort.

    • Thinking in terms of “Eigen-evaluation” caused me to update on the value of mechanisms not just for people adding more ideas to the collective, but also for how they critique those ideas. For example, I’ve updated more in favor of the LessWrong Annual Review for improving the community’s Eigen-evaluation.

  1. ^

    Arguably most scientific work is simply about being able to model things and make accurate predictions, regardless of whether those predictions are useful for anything else. In contrast to that, alignment research is more of an engineering discipline, and the research isn’t just about predicting some event, but being able to successfully build some system. Accordingly, I’m choosing examples here that also sit at the juncture between science and engineering.

  2. ^

    Yes, I’ve had a very diverse and extensive research career.

  3. ^

    I also model social status as operating similarly.

  4. ^

    Raemon’s recent post provides a cute illustration of this.

  5. ^

    A concrete decision that I would make differently: in a world where we are very optimistic about alignment research, we might put more effort into getting those research results put to use in frontier labs. In contrast, in pessimistic worlds where we don’t think we have good solutions, overwhelmingly effort should go into pauses and moratoriums.