Current:
Technical Specialist, Safeguarded AI, Advanced Research and Invention Agency (ARIA)
Past:
Director and Co-Founder of “Principles of Intelligent Behaviour in Biological and Social Systems” (pibbss.ai)
Research Affiliate and PhD student with the Alignment of Complex Systems group, Charles University (acsresearch.org)
Programme Manager, Future of Humanity Institute, University of Oxford
Research Affiliate, Simon Institute for Longterm Governance
I found this article ~very poor. Many of the rhetorical moves adopted in the piece seem largely optimised for making it easy to stay on the “high horse”. Talking about a singular AI doomer movement is one of them. Adopting the stance that AGI is not near, and that there is thus nothing to worry about, is another. Whether or not that’s true, it certainly makes it easy to point your finger at folks who are worried and say, ‘look at this silly theater’.
I think it’s somewhat interesting to ask whether there should be more coherence across safety efforts, and at the margin the answer might be yes. But I’m confused about the social model that suggests there could be something like a singular safety plan. Instead, I think we live in a world where more and more people are waking up to the implications of AI progress, and of course there will be diverse and to some extent non-coherent reactions to this. I’m also confused about the suggestion that a singular, coherent safety plan would even be desirable, given the complexity and amount of uncertainty involved in the challenge.