Co-Director at ML Alignment & Theory Scholars Program (2022-present)
Co-Founder & Board Member at London Initiative for Safe AI (2023-present)
Regrantor at Manifund (2023-present)
Advisor at Catalyze Impact (2023-present)
Advisor at AI Safety ANZ (2024-present)
Ph.D. in Physics at the University of Queensland (2017-2023)
Group organizer at Effective Altruism UQ (2018-2021)
Give me feedback! :)
How fast should the field of AI safety grow? An attempt at grounding this question in some predictions.
Ryan Greenblatt seems to think we can get a 30x speed-up in AI R&D using near-term, plausibly safe AI systems; assume every AIS researcher can be 30x’d by Alignment MVPs
Tom Davidson thinks we have <3 years from 20%-AI (AI that can readily automate ~20% of cognitive tasks) to 100%-AI (full automation); assume we have ~3 years to align AGI with the aid of Alignment MVPs
Assume the hardness of aligning TAI is equivalent to the Apollo Program (90k engineer/scientist FTEs x 9 years = 810k FTE-years); with each researcher 30x'd for ~3 years (90 effective FTE-years per person), we therefore need ~9k more AIS technical researchers (810k / 90 = 9k)
The technical AIS field is currently ~500 people; at the current growth rate of 28% per year, it will take 12 years to grow to 9k people (Oct 2036)
Alternatively, if we bound the hardness by the Manhattan Project (25k FTEs x 5 years = 125k FTE-years), we need only ~1.4k researchers (125k / 90 ≈ 1.4k), which the field would reach in ~4 years at the current growth rate (late 2028)
Metaculus predicts weak AGI in 2026 and strong AGI in 2030; clearly, more talent development is needed if we want to make the Nov 2030 AGI deadline!
If we want to hit the 9k-researcher goal by the Nov 2030 AGI deadline, we need an annual growth rate of ~65%, 2.3x the current growth rate of 28% (arithmetic sketched below)
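These estimates are easy to check or vary. Here is a minimal back-of-the-envelope sketch in Python of the arithmetic above; the inputs (30x speed-up, ~3-year runway, ~500 current researchers, 28% annual growth) are the assumptions from this list, while the function names and the ~5.8-year horizon to Nov 2030 are illustrative choices of mine, not from the cited sources.

```python
import math

# Back-of-the-envelope check of the growth estimates above.
SPEEDUP = 30             # Greenblatt: ~30x AI R&D speed-up from Alignment MVPs
RUNWAY_YEARS = 3         # Davidson: ~3 years from 20%-AI to 100%-AI
FTE_PER_RESEARCHER = SPEEDUP * RUNWAY_YEARS   # 90 effective FTE-years each

CURRENT_FIELD = 500      # ~current technical AIS researcher headcount
GROWTH_RATE = 0.28       # current annual growth rate of the field

def researchers_needed(total_fte_years: float) -> float:
    """Headcount needed to deliver `total_fte_years` with 30x'd researchers."""
    return total_fte_years / FTE_PER_RESEARCHER

def years_to_reach(target: float) -> float:
    """Years of compound growth to take the field from ~500 to `target` people."""
    return math.log(target / CURRENT_FIELD) / math.log(1 + GROWTH_RATE)

def required_growth_rate(target: float, years: float) -> float:
    """Annual growth rate needed to reach `target` headcount in `years` years."""
    return (target / CURRENT_FIELD) ** (1 / years) - 1

apollo = 90_000 * 9       # Apollo Program: 810k FTE-years
manhattan = 25_000 * 5    # Manhattan Project: 125k FTE-years

print(f"Apollo bound:    {researchers_needed(apollo):,.0f} researchers")     # 9,000
print(f"Manhattan bound: {researchers_needed(manhattan):,.0f} researchers")  # ~1,389
print(f"Years to 9k at 28%/yr:   {years_to_reach(9_000):.1f}")   # ~11.7 (Oct 2036)
print(f"Years to 1.4k at 28%/yr: {years_to_reach(1_400):.1f}")   # ~4.2 (late 2028)
# Assumed ~5.8 years from early 2025 to the Nov 2030 Metaculus strong-AGI date
print(f"Rate needed for 9k by Nov 2030: {required_growth_rate(9_000, 5.8):.0%}")  # ~65%
```

Running it reproduces the ~9k and ~1.4k headcounts, the ~12-year timeline at 28%/yr, and the ~65% growth rate needed to reach 9k by Nov 2030.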