Curated. Tackles thorny conceptual issues at the foundation of AI alignment while also revealing the weak spots of the abstractions it uses to do so.
I like the general strategy of trying to make progress on understanding the problem using only the concept of “basic agency,” rather than first tackling the much harder problem of formalizing a more full-throated conception of agency, whether or not that turns out to be enough in the end.
The core point of the post is plausible and worthy of discussion: certain kinds of goals only make sense at all given that certain kinds of patterns are present in the environment, and much of the difficulty of making sense of the alignment problem lies in identifying what those patterns are for the goal of “make aligned AGIs.” I also appreciate that this post explicitly lays out the (by my lights) canon-around-these-parts general patterns that render the specific goal of aligning AGIs sensible (e.g., compression-based analyses of optimization) and presents them as such.
The introductory examples of patterns that must be present in the environment for certain simpler goals to make sense (and especially of how a goal stops making sense when the pattern is absent) are clear and evocative. I would not be surprised if they helped someone notice important ways in which the canon-around-these-parts hypothesized patterns that render “align AGIs” a sensible goal are flawed.
Curated. Comparing model performance on tasks to the time human experts need to complete the same tasks (at a fixed reliability) is worth highlighting, since it helps operationalize terms like “human-level AI” and AI capability levels in general. Furthermore, by making this empirical comparison and discovering a 7-month doubling time, this work significantly reduces our uncertainty both about when to expect certain capabilities and (more impressively, in my view) about how to conceptualize those capability levels. That is, on top of reducing our uncertainty, I think this work also provides a good general format/frame for reporting AI capabilities forecasts, e.g., we have X years until models can do things that take human experts Y hours to do with reliability Z%.
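To make the “X years until Y-hour tasks at reliability Z%” frame concrete, here is a minimal sketch of the extrapolation arithmetic, assuming pure exponential growth at the 7-month doubling time mentioned above. The 1-hour starting horizon and 80% reliability threshold are illustrative assumptions, not METR's numbers or code.

```python
# Illustrative sketch (not METR's methodology): extrapolating a task-time
# horizon under an assumed exponential trend with a 7-month doubling time.
import math

DOUBLING_TIME_MONTHS = 7      # doubling time discussed in the post
CURRENT_HORIZON_HOURS = 1.0   # assumed current horizon at the chosen reliability
RELIABILITY = 0.8             # assumed reliability threshold (80%)

def horizon_after(months: float) -> float:
    """Task length (in human-expert hours) completable at the chosen
    reliability after `months`, assuming pure exponential growth."""
    return CURRENT_HORIZON_HOURS * 2 ** (months / DOUBLING_TIME_MONTHS)

def months_until(target_hours: float) -> float:
    """Months until the horizon reaches `target_hours` under the same trend."""
    return DOUBLING_TIME_MONTHS * math.log2(target_hours / CURRENT_HORIZON_HOURS)

if __name__ == "__main__":
    for target in (8, 40, 160):  # a workday, workweek, and workmonth of expert time
        years = months_until(target) / 12
        print(f"~{years:.1f} years until {target}-hour tasks at "
              f"{RELIABILITY:.0%} reliability (under these assumptions)")
```

Under these illustrative assumptions, 8-hour tasks arrive in roughly 1.75 years and 40-hour tasks in roughly 3.1 years; the point is only to show how the format turns a doubling time into a concrete forecast.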
I also appreciated the discussions this post inspired about whether we should expect the slope in log-space to change, and if so in which direction, as well as the related question of whether the trend will go superexponential. Interesting arguments and models were put forth in both threads.
I hope METR explores other methods of operationalizing and forecasting AI capability levels in the future: for example, comparing human expert reliability within specific task domains to model reliability on the same tasks, or comparing the time humans take to become reliable experts in a domain to model reliability in that domain.