Co-Executive Director at ML Alignment & Theory Scholars Program (2022-present)
Co-Founder & Board Member at London Initiative for Safe AI (2023-present)
Manifund Regrantor (2023-present) | RFPs here
Advisor, Catalyze Impact (2023-present) | ToC here
Advisor, AI Safety ANZ (2024-present)
Ph.D. in Physics at the University of Queensland (2017-2023)
Group organizer at Effective Altruism UQ (2018-2021)
Give me feedback! :)
Obviously I disagree with Tsvi regarding the value of MATS to the proto-alignment researcher; I think exposure to high-quality mentorship and peer-sourced red-teaming of your research ideas is incredibly valuable for emerging researchers. However, he makes a good point: ideally, scholars shouldn't feel pushed to write highly competitive LTFF grant applications so early in their research careers; there should be longer-term, unconditional funding opportunities. I would love to unlock this so that a subset of scholars can explore diverse research directions for 1-2 years without 6-month grant timelines looming over them. Currently cooking something in this space.