Current:
Director and Co-Founder of “Principles of Intelligent Behaviour in Biological and Social Systems” (pibbss.ai)
Research Affiliate and PhD student with the Alignment of Complex Systems group, Charles University (acsresearch.org)
Main research interests:
How can a naturalized understanding of intelligent behavior (across systems, scales and substrates) be translated into concrete progress towards making AI systems safe and beneficial?
What are scientific and epistemological challenges specific to making progress on questions in AI risk, governance and safety? And how can we overcome them?
Other interests:
Alternative AI paradigms and their respective capabilities-, safety-, and governability-profiles
The dual (descriptive-prescriptive) nature of the study of agency and the sciences of the artificial
Pluralist epistemic perspective on the landscape of AI risks
The interface between technical and governance aspects of AI alignment
...and more general ideas from philosophy & history of science, political philosophy, complex systems studies, and, broadly speaking, enactivist theories of cognition — insofar as they are relevant to questions in AI risk/governance/safety
Going back further, I have also spent a lot of time thinking about how (bounded) minds make sense of and navigate a (complex) world (i.e. rationality, critical thinking, etc.). I have several years of experience in research organization, including work at FHI, CHERI, Epistea, etc. I have a background in International Relations, and spent large parts of 2017–2019 doing complex-systems-inspired research on group decision making and political processes, with the aim of building towards an appropriate framework for “Longterm Governance”.
That’s cool to hear!
We are hoping to write up our current thinking on ICF at some point (although I don’t expect it to happen within the next 3 months) and will make sure to share it.
Happy to talk!