Co-Executive Director at ML Alignment & Theory Scholars Program (2022-present)
Co-Founder & Board Member at London Initiative for Safe AI (2023-present)
Manifund Regrantor (2023-present) | RFPs here
Advisor, Catalyze Impact (2023-present) | ToC here
Advisor, AI Safety ANZ (2024-present)
Ph.D. in Physics at the University of Queensland (2017-2023)
Group organizer at Effective Altruism UQ (2018-2021)
Give me feedback! :)
Makes you wonder if there’s some 4D chess going on here, though Occam’s razor suggests otherwise. And if true, this seems wholly irresponsible, given that AI risk skeptics can point to this situation as evidence that “even if we do no safety testing/guardrails, it’s not that bad! It just offends a few people.” It seems hard to say in which direction this will affect SB 53, for example.