My (current) model of what an AI governance researcher does
Purpose of this post: We need more people working as AI governance researchers. This post establishes a basic framework for thinking about what this career path entails, and how you might test your fit. I hope to receive feedback on this framework.
Epistemic status and disclosure: I have thought about and researched this on my own for about 15 hours, and while I hold an academic degree, I don’t yet consider myself an expert. Part of my motivation for writing this post is to gather feedback and improve my own thinking.
Introduction
Alongside the many positive developments that AI brings, it poses significant risks. To manage these risks while realising AI’s benefits, we have the still relatively nascent field of AI safety. Contributions to this field range from technical alignment research and AI governance to advocacy and field-building. But how does an individual choose which pathway to pursue?
A strong starting point is to choose an area with good personal fit and to understand your unique capacity to contribute. Making sense of this can be challenging, however. Experimenting with different roles and projects is a promising way to begin: it lets you learn about the nature of the work and provides real-world evidence to test your hypotheses about what might fit you best.
The view that high-quality research is a critical area to work on inspired me to test my personal fit for AI governance research. This post therefore focuses on that career path, though I expect most (if not all) of the steps to apply similarly to technical research.
The aims of an AI governance researcher
To determine whether a career in AI governance research is right for you, it helps to understand the purpose of this career path. I define the high-level goal of an AI governance researcher as follows: to generate and share useful insights that will better position society to effectively manage the development of increasingly advanced AI systems. But what does this mean in concrete terms?
To achieve this overarching aim effectively, an AI governance researcher typically seems to engage in seven different activity clusters.
1. Developing a deep understanding of relevant areas
1.1 The field of AI governance
Developing an extensive, detailed understanding of the AI governance landscape, including key concepts, major players, ongoing projects and current developments.
1.2 Your own position
Establishing a well-developed, internally consistent set of views on how AI should be governed based on your model of the world and the field. You update this regularly based on new evidence.
1.3 Your strengths and personal challenges
To understand what type of research to focus on and which impact opportunities to pivot to, you develop a clear understanding of your own strengths and weaknesses.
Essentially, the aim of this domain is to put yourself in a position where you have sufficient knowledge to guide your actions. It is the foundation for all the following steps.
2. Identifying research gaps
As an analogy, if the first domain is akin to continuously creating and updating an accurate map of the system, the second domain involves identifying areas of the map that lack detail and insight.
This activity cluster emphasises the process of critically evaluating the space and coming up with well-substantiated hypotheses about what work and insights are needed to advance the field, and specifically the theory of victory you consider most important.
3. Prioritising between opportunities for impact
Your time and resources are limited, so you need to figure out which work to prioritise and which to ignore.
Continuing with the map analogy, the third step involves understanding the bigger picture of the existing map, including the areas that lack detail, and deciding which of these foggy areas to sketch out further.
This is accomplished, among other things, through conversations with knowledgeable people, applying your judgement, and using a systematised approach for filtering out the highest-impact research questions.
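To make "a systematised approach" a bit more concrete, here is a minimal toy sketch of one way this could look: scoring candidate questions on importance, tractability and neglectedness, then ranking them by a weighted sum. The factors, weights and example entries are my own placeholder assumptions, not an established method in the field.

```python
# Toy sketch (illustrative only): rank candidate research questions by a
# weighted sum of importance, tractability, and neglectedness.
# All factors, weights, and example entries are placeholder assumptions.

from dataclasses import dataclass

@dataclass
class Candidate:
    question: str
    importance: int     # 1-10: how much would a good answer matter?
    tractability: int   # 1-10: how likely is meaningful progress?
    neglectedness: int  # 1-10: how few others are already working on it?

# Placeholder weights reflecting one possible set of priorities.
WEIGHTS = {"importance": 0.5, "tractability": 0.3, "neglectedness": 0.2}

def score(c: Candidate) -> float:
    """Weighted sum of the three factors."""
    return (WEIGHTS["importance"] * c.importance
            + WEIGHTS["tractability"] * c.tractability
            + WEIGHTS["neglectedness"] * c.neglectedness)

candidates = [
    Candidate("Example question A", 8, 5, 6),
    Candidate("Example question B", 7, 7, 4),
    Candidate("Example question C", 9, 3, 7),
]

# Print candidates from highest to lowest score.
for c in sorted(candidates, key=score, reverse=True):
    print(f"{score(c):.1f}  {c.question}")
```

The point of such a sketch is not the numbers themselves but forcing yourself to make your prioritisation criteria explicit and comparable.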
4. Addressing your research question of choice
This is the “classical” research process and it involves multiple sub-steps. For every research project this will be slightly different, but the main steps seem to involve:
4.1 Building up the knowledge needed to tackle the research question.
4.2 Coming up with preliminary hypotheses and/or research objectives.
4.3 Choosing an appropriate research design.
4.4 Iteratively applying your methodology, rethinking your position based on new evidence, and writing up your insights.
5. Sharing and publicising your insights
If you don’t publish or share your insights, you cannot influence the behaviour and decision-making of your target audience.
6. Getting people to think about your insights
You won’t realise your impact as a researcher unless people actually internalise the insights you have generated into their own world models and work.
I expect this step to mostly come down to a combination of having a well-established network that trusts you, a good track record, being proactive in sharing your work, and luck.
7. Evaluating your impact
To gauge how to improve as a researcher, and whether a research project realised its intended impact, it seems important to take the time to gather relevant data.
In practice this probably involves a combination of surveys, feedback conversations, and evaluating how many other stakeholders have used and/or cited your paper.
Indicators for success
While I have divided the object-level aims of an (AI governance) researcher into seven distinct clusters, it appears to me that there is significant overlap between them and that they largely occur in cycles. Each of these areas necessitates a different skill set. While I am still grappling with developing good metrics to evaluate how well one is performing in each domain, I can recommend checking out this and this to learn about general indicators for predicting success.
Setting up an experiment to test your personal fit
While talking to others and thinking about what it’s like to be an AI governance researcher can indeed be helpful, actually undertaking a research project will likely give you the clearest sense of whether this career path is something you want to pursue full-time.
It appears to me that one of the best ways to do so is to work through the seven areas above while gathering continuous feedback. In practice, this could mean dedicating time to getting up to speed on the different areas of AI governance research, choosing a research question that you are excited about, tackling it, and writing up your findings.
Through a combination of receiving feedback and reflecting on how things went, you will gain valuable information that can help you determine your next steps.
I hope you have a good day, and thank you for taking the time to think about this!
Some ideas on how you can consider engaging
Making use of the voting function to signal what you think about the post
Dropping a short comment to share:
What you think I am missing or getting wrong
What you think I got right
Anything else
Sharing anonymous feedback: https://www.admonymous.co/johandekock
Sharing your thoughts on which activities you believe are most crucial for excelling as an AI governance researcher
Basic definitions
Research = the process of investigating, understanding and making sense of phenomena with the aim of generating new, useful insights about them. These insights are published and made available to different kinds of stakeholders so that they can use them to inform their decision-making, update their understanding of the world and act more effectively.
AI Governance = the study of norms, policies, and institutions that can help humanity navigate the transition to a world with advanced artificial intelligence.
AI Governance research = the undertaking of generating and sharing useful insights that will put society in a better position to effectively deal with the creation of progressively more advanced AI systems.
My big concern with AI governance as it is currently conducted is that the people doing it are having a correlated failure to notice and address the true risks and timelines. I’m not sure this makes them actively harmful to humanity’s survival, but it sure does limit their helpfulness.
For more details, see this comment on my AI timelines.
Thank you for your thoughts! I read through your linked comment, and everything you wrote seems plausible to me. In fact, I also have short timelines and think that AGI is around the corner. As for your second point, regarding correlated failure, I would be curious whether you are willing to give an example.
Also, what types of research questions do you think (long-term) AI governance researchers should address if we are right?
If your time is limited, getting your take on the second question would be most valuable.