Given growth in both AI research and alignment research over the past 5 years, how do the rates of progress compare? Maybe separating absolute change, first and second derivatives.
I’d guess that alignment research is now a smaller fraction of people working on “AGI” or on really ambitious AI projects (which has grown massively over the last 5 years), but a larger fraction of the total AI community (which has grown not-quite-as-massively).
For higher derivatives, my guess is that alignment is currently doing better than AI more broadly, so I’d tentatively expect alignment to grow proportionally faster over the coming years (maybe converging to something resembling OpenAI/DeepMind-like levels of alignment investment throughout the entire community?)
I’m really speculating wildly though, and I would update readily if someone had actual numbers on growth.