A Year of AI Increasing AI Progress
In July, I made a post about AI being used to increase AI progress, along with this spreadsheet that I’ve been updating throughout the year. Since then, I have run across more examples, and had others submit examples (some of which were published before the date I made my original post).
2022 has included a number of instances of AI increasing AI progress. Here is the list. In each entry I also credit the person who originally submitted the paper to my list.
A paper from Google Research used a robust supervised learning technique to architect hardware accelerators [March 17th, submitted by Zach Stein-Perlman]
A paper from Google Research and Stanford fine-tuned a model on its own chain-of-thought outputs to improve performance on reasoning tasks [March 28th, submitted by Nathaniel Li]
A paper from OpenAI used LLMs to help humans find flaws in other LLMs, thereby enabling them to more easily improve those models [June 12th, submitted by Dan Hendrycks]
A paper from Google used machine learning to optimize compilers. This is less obviously accelerating AI, but an earlier version of the compiler is used in PyTorch, so it may end up doing so. [July 6th, submitted by Oliver Zhang]
NVIDIA used deep reinforcement learning to generate nearly 13,000 circuits in their newest GPUs. [July 8th, submitted by me]
Google found that ML code completion improved the productivity of their engineers. Some of them are presumably working in AI. [July 27th, submitted by Aidan O’Gara]
A paper from Microsoft Research and MIT used language models to generate programming puzzle tasks for other language models. When fine-tuned on these tasks, the models were much better at solving the puzzles. [July 29th, submitted by Esben Kran]
A paper from Google and UIUC used a language model’s own outputs, filtered by a majority-vote procedure, to fine-tune that same model (a minimal sketch of this pattern appears after the list). [September 30th, submitted by me]
A paper from DeepMind used reinforcement learning to discover more efficient matrix multiplication algorithms (see the worked example after the list). [October 5th, submitted by me]
A paper from Anthropic used feedback from language models, rather than from humans, to improve language models. [December 16th, submitted by me]
A paper from a number of universities used language models to generate examples of instruction following, which were then filtered and used to fine-tune language models to follow instructions better. [December 20th, submitted by Nathaniel Li]
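Several of these results share the same basic loop: sample outputs from a model, filter them (by majority vote, by checking answers, or with other heuristics), and fine-tune on whatever survives. Here is a minimal sketch of one round of that loop, using the majority-vote filter from the Google/UIUC paper; `generate`, `extract_answer`, and `fine_tune` are hypothetical stand-ins for your sampling and training machinery, not any particular library’s API.

```python
from collections import Counter

def self_improve(generate, extract_answer, fine_tune, prompts, n_samples=32):
    """One round of the generate -> filter -> fine-tune loop.

    generate, extract_answer, and fine_tune are caller-supplied callables
    (hypothetical stand-ins, not any specific library's API).
    """
    kept = []
    for prompt in prompts:
        # Sample several chain-of-thought completions for each prompt.
        completions = [generate(prompt) for _ in range(n_samples)]
        # Majority vote over the final answers; other papers filter by
        # known-correct answers or heuristic quality checks instead.
        answers = [extract_answer(c) for c in completions]
        consensus, _ = Counter(answers).most_common(1)[0]
        # Keep only completions whose answer agrees with the consensus.
        kept += [(prompt, c) for c, a in zip(completions, answers) if a == consensus]
    # Fine-tune the model on its own filtered outputs and return it.
    return fine_tune(kept)
```

As for the DeepMind entry, “more efficient matrix multiplication algorithms” means constructions in the family of Strassen’s 1969 algorithm, which multiplies 2x2 block matrices with seven block multiplications instead of the naive eight; roughly speaking, the paper’s RL system searches for new constructions of this kind. Here is Strassen’s classic version in plain NumPy, just to show what sort of object is being discovered (this is not one of the newly found algorithms):

```python
import numpy as np

def strassen(A, B):
    """Multiply even-sized square matrices via Strassen's 2x2 block
    scheme: 7 block multiplications instead of the naive 8."""
    n = A.shape[0] // 2
    a11, a12, a21, a22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    b11, b12, b21, b22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    m1 = (a11 + a22) @ (b11 + b22)
    m2 = (a21 + a22) @ b11
    m3 = a11 @ (b12 - b22)
    m4 = a22 @ (b21 - b11)
    m5 = (a11 + a12) @ b22
    m6 = (a21 - a11) @ (b11 + b12)
    m7 = (a12 - a22) @ (b21 + b22)
    # Reassemble the four output blocks.
    return np.block([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A, B = np.random.rand(4, 4), np.random.rand(4, 4)
assert np.allclose(strassen(A, B), A @ B)
```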
I’m writing this fairly quickly, so I won’t add extensive commentary beyond what I said in my last post, but I’ll point out two things here:
It is pretty common these days for people to use language model outputs to improve language models. This trend appears likely to continue.
A lot of these papers are from Google. Not DeepMind, Google. Google may not have declared they are aiming for AGI, but they sure do seem to be writing a lot of papers that involve AI increasing AI progress. It seems important not to ignore them.
Did I miss any? You can submit more here.
I was aware of a couple of these, but most are new to me. Obviously, published papers (even if this list is comprehensive) represent only a fraction of what is happening and are likely somewhat behind the curve.
And it’s still fairly surprising how much of this there is.
Also, from MIT CSAIL and Meta: “Gradient Descent: The Ultimate Optimizer”
Thanks for putting this together, Thomas. Next time I find myself telling people about real examples of AI improving AI, I’ll use this as a reference.