AI Safety University Organizing: Early Takeaways from Thirteen Groups
TL;DR
The number of AI Safety groups worldwide has grown rapidly, but there are many important uncertainties around strategy and tactics.
This guide summarizes the opinions of the organizers of 13 AI safety university groups on 27 strategy-related topics, and is intended primarily for other organizers.
The opinions were collected through a virtual meeting, a series of retrospective documents, and an in-person structured group discussion.
These opinions should be taken as rough early guesses on these topics, some more confident than others.
Introduction
Within the last few years, the number of AI safety university groups has grown from nearly zero to ~70. Many of these groups have spun out of existing Effective Altruism university groups but recently have become increasingly independent. While there is a solid body of work on effective organizing techniques for EA groups, the field of AI safety as a whole is still largely pre-paradigmatic – and this is especially true of theories of change for university groups. Despite this, we have seen a number of groups rapidly scale up and achieve exciting successes, in large part thanks to intentional advertising, facilitating, and other organizing tactics. After talking with organizers from >10 of the world’s top AI safety university groups, we’ve collected a list of these tactics, along with some discussion surrounding the effectiveness of each. Once again, given the pre-paradigmatic nature of the field, we’d like people to read this post not as “here is a list of tried-and-true best practices that all organizers should follow”, but rather as a tentative collection of recently-acquired wisdom that can help inform people’s priors on what has and hasn’t worked in the space of AI safety university organizing.
This post will start by briefly outlining the methodology that we used to source perspectives from various university organizers. Then, it will dive into our key takeaways, along with any uncertainties we still have. We hope that readers will engage with this post by a) actually internalizing/visualizing the techniques we discuss (what would happen if I / a friend implemented this? How would I/they go about it?), and b) reading with a critical lens and providing feedback – as mentioned previously, most university groups have not been around very long, so we are still very much in a stage where we would like to update our beliefs/strategies based on community feedback.
Methodology
To source ideas from university organizers on “what works and what doesn’t” in their AI safety groups, we went through three stages of interaction: 1) a virtual meeting among organizers to discuss the most helpful form for knowledge-sharing, 2) soliciting retrospective documents from various universities, and 3) a structured in-person discussion among organizers on specific claims. Across different stages, we had participation from organizers from the following 13 groups:
AI Safety Initiative at Georgia Tech
AI Safety at UCLA
Stanford AI Alignment
Oxford AI Safety Initiative
Berkeley AI Safety
AI Safety Maastricht
MIT AI Alignment
AI Safety Student Team at Harvard
Brown AI Safety Team
Wisconsin AI Safety Initiative
AI Safety Initiative at UC Chile
Cambridge AI Safety Hub
AI Safety USC
In June of this year, we had a meeting with 13 AI safety organizers and prospective organizers from nine universities, where we discussed the high-level theory of change for AI safety groups, along with the best ways for groups to help each other through knowledge-sharing. While we had initially considered having a group of organizers jointly write an AI safety university groups guide, we ultimately decided that the best way to avoid knowledge cascades was to have each university write up a retrospective document of their own, which would then later be synthesized.
Over the next two months, we collected retrospective documents from various universities. We reached out to organizers from 11 universities and ultimately received documents from seven. Organizers were prompted to write ~2 pages outlining their organization’s theory of change, structure, advertisement techniques, interaction with other organizations, activities, and external support. The full list of guidelines is in Appendix A.
In August of this year, we ran a structured group discussion at OASIS 3.0, a workshop for AI Safety group organizers, in which around 15 organizers participated. Based on our main takeaways from the retrospectives, we made a list of 21 claims about group organizing, and organizers voted for or against each claim, separately voting on whether they wanted to discuss it. We then discussed 8 of these claims, particularly the more controversial ones, and summarized the main takeaways from each discussion. The “Early takeaways” section of this post is largely modeled after our discussion at OASIS 3.0; the precise number of organizers who agreed and disagreed with each claim is included in Appendix B.
Early takeaways
The takeaways that follow are our best attempt at summarizing important opinions and discussions that surfaced through the activities we ran; they should only be taken as rough early guesses and not prescriptive recommendations.
Programming
Reading assignments for fellowships should be done during sessions rather than expecting members to read them beforehand [Very Confident]
This lesson was of particular interest. We found that a large portion of groups (perhaps even the majority) do not have fellowship participants do readings (or watch videos) in session, yet the organizers with the strongest takes on this question all believed the claim to be true.
The reasoning is that a) many participants do not thoroughly read materials when they are expected to do so at home, and b) reading materials in-session increases retention due to recency and establishes common knowledge among participants.
If readings are done during fellowship sessions, the facilitators should have them printed out rather than asking participants to read them on their devices [Confident]
Printing out readings helps with engagement by a) allowing participants to more easily annotate documents and b) removing the distraction of having open devices (e.g., participants may switch between tabs on their computer to take care of other tasks).
The AI Safety Fundamentals curricula are a strong base for introductory fellowships [Confident]
Most organizers thought that BlueDot’s AISF curricula (alignment and governance) provided a strong foundation for their own syllabi.
At the same time, a number of groups are experimenting this semester with basing their intro fellowship on the AI Safety, Ethics, and Society textbook.
It’s useful to put substantial effort into improving the experience of people participating in introductory programs [Confident]
Organizers discussed various important factors in maintaining high retention rates. These included providing food during sessions, printing out readings, and ensuring logistics go smoothly.
Running in-group research projects when there are good mentors around is a valuable activity [Uncertain]
There was discussion around whether research should be a central activity of AI Safety groups, part of a portfolio, or even an activity at all.
In general, it seems that groups that have tried performing in-house research have had at most mixed success.
Because of this, some argued that teaching basic research skills is not a group’s comparative advantage unless it has highly skilled grad students, and that universities often provide better first research opportunities for undergraduates.
There was also consensus around generally referring members to programs like SPAR, MARS, or MATS, freeing up groups to run activities that can only be run locally.
The optimal length for introductory programs in most groups is around seven weeks [Uncertain]
Typical alternatives to 6-8 week fellowships include split fellowships or shorter 3-5 week fellowships coupled with other programming after the program ends.
Arguments for shorter introductory programs include reducing wasted effort on non-engaged members and mitigating the effects of high dropout rates over the course of the program.
At the same time, longer programs are the default in most groups, and they have the advantage of providing members with a good baseline understanding of topics in AI Safety during their first semester.
Including hands-on content in introductory programs, especially technical ones, is beneficial [Uncertain]
The general claim is that introducing more hands-on material as part of introductory programs might attract more ‘doers’ and complement the conceptual discussions in a typical program.
Some organizers mentioned concerns about take-home code exercises (i.e., self-guided Jupyter notebooks) due to low completion rates.
Organizers generally agreed that another way to attract doers is to build a reputation on campus for being action-oriented, e.g., by showcasing members’ projects on the group’s website, making it less necessary for the introductory program to also be action-oriented.
Advanced upskilling programs like MLAB/ARENA aren’t well suited for university groups [Uncertain]
Organizers disagreed on the effectiveness of running programs like MLAB/ARENA locally, mainly given the low retention rates in many groups.
At the same time, many organizers did agree about specific factors that are likely to make the programs more or less effective. These include:
These programs should be run in the form of code sprints (long, intense sessions) rather than short individual sessions (e.g., 1hr long)
These programs require significant skill and engagement on the part of organizers (e.g., to help participants debug code) and should only be run if an organizer is confident in their ability to do so.
Recruitment
Being selective (<50% acceptance rates) for introductory programs is generally beneficial [Very Confident]
Organizers generally agreed that being selective in introductory programs can help raise the quality of cohorts in the program, assuming one has a big enough pool of applicants.
At the same time, organizers emphasized selecting participants against some threshold of skills, motivation, etc., rather than targeting a fixed acceptance rate.
Attracting and engaging graduate students is generally challenging [Very Confident]
While organizers agreed that engaging with graduate students was important, most organizers found this quite challenging, especially if there weren’t existing grad student organizers or participants in the group.
Some organizers suggested that the best way to engage graduate students (and also the best way to leverage their talents) was to offer them a leadership or mentorship position. This could include running a fellowship / reading group / research team, or simply coming in to give a talk about their research.
Having a 15+ minute application process is good, regardless of selectivity [Confident]
Organizers agreed that making application processes moderately effortful helps in screening out participants who are less motivated and more likely to drop off during the program.
One should select primarily for talent, not motivation, in fellowships [Uncertain]
Organizers disagreed on whether and how to screen for competencies in their introductory programs.
Some groups have started requiring students to be at least juniors and to have taken introductory ML or Deep Learning classes, whereas others have kept their introductory programs fairly open.
Independent of any requirements, there was consensus on allowing particularly exciting people to participate regardless.
Outreach
Where allowed by universities, utilizing mailing lists or mass emails is highly effective for advertising [Very Confident]
Doing mass outreach on campus for programming seems generally effective, and organizers emphasized the value of leveraging mailing lists for this purpose.
Advertising should explicitly mention x-risk/catastrophic concerns as part of the group’s public identity [Very Confident]
Organizers agreed that it’s important to be upfront about a group’s focus on catastrophic risks but also emphasized that there’s a lot of latitude in how one can explain this focus to others (e.g., “existential risks” vs “global catastrophic risks” vs “risks from advanced AI”).
Talks and panels are an effective group activity [Uncertain]
Organizers generally agreed that talks and panels can sometimes be helpful in increasing a group’s credibility and brand awareness but disagreed on their effectiveness as a start-of-the-semester recruiting tool.
Some organizers also mentioned that talks could help shift the Overton window on campus, potentially increasing interest in AI safety. This can be helpful in certain cases, e.g., when trying to get faculty to engage more seriously with AI Safety.
Collaborations
Collaboration with other student groups is beneficial [Very Confident]
While all organizers agreed with this claim, some noted that it is important to consider reputational risks from:
Associations with clubs that have a negative reputation (e.g., are known for a lack of rigor),
Exposure to negative press on AI safety groups due to overextension/miscommunication.
If there’s an AI ethics club on campus, you should try collaborating with them [Very Confident]
Collaboration can include, e.g., cross-promotions, running joint socials, and hosting speaker events.
However, organizers generally expressed that AI safety groups should think carefully about collaborations which would strongly dilute the general focus of the group (e.g., by running joint introductory programs with groups with a very different focus).
It is important for AI safety groups to actively distance themselves from Effective Altruism groups [Uncertain]
EA and AI Safety groups often coexist on campus, so organizers often wonder how much to integrate both student groups, if at all.
Arguments against close integration included adverse self-selection among participants (i.e., AIS-curious students staying away because of EA, and vice versa) and concerns about EA’s reputation affecting that of AI Safety groups.
Arguments in favor of integration included motivational benefits and general synergies between the two clubs, especially when organizers are shared between both groups.
A compromise suggestion was to have one-off joint events but not necessarily recurrent ones, allowing both communities to develop independently.
Community building
Socials should be run frequently (every one or two weeks) [Very Confident]
Organizers generally agreed that given enough capacity, it’s often beneficial to run social events pretty frequently.
Being selective in membership and offering exclusive events is beneficial [Very Confident]
Organizers agreed that having some defined sense of group membership was helpful (i.e., some definition of who’s a member and what members can do within the group) and that having completed an introductory program is often a good minimum qualification, albeit sometimes insufficient.
Organizers mentioned gating a number of events and programs to members, including retreats/workshops, reading groups, research projects, and socials.
Beyond this, however, there was less consensus – some clubs effectively accept all member applicants who have completed a fellowship, while others evaluate applications more selectively.
The strategies seemed partially dependent on school culture, with AI safety clubs not wanting to stray too far from typical rates of selectivity for reputational reasons.
Many programs can strongly benefit from providing good food to students during events or programming [Confident]
Organizers mentioned that good food can significantly help with retention across different types of programs.
In the case of introductory programs run in the evening, organizers agreed that providing dinner is often a good choice.
It is important to have communication channels for career opportunities, such as resource-sharing channels in Slack or Discord and a monthly newsletter with updates on events and position openings [Confident]
One of the key components of an AI Safety club’s theory of change is the ability to direct students into high-impact positions in the field, and these channels are the most direct way to do so.
Group strategy
Time management is crucial, as participants are often time-constrained [Very Confident]
In particular, most organizers were in strong agreement that it is better to run a small number of programs very effectively than to run a greater number of programs with less organizer support.
School-specific factors play a significant role in program effectiveness [Very Confident]
Organizers generally agreed that cultural elements of specific campuses, states, or countries seem to influence the effectiveness of different programs.
For example, strategy-wise, projecting selectiveness and exclusivity can be beneficial on some campuses while harmful on others, given different cultural expectations for how student groups should work.
Most large groups are not doing enough outreach to faculty members [Very Confident]
It’s important to explicitly segment activities across target audiences [Confident]
For example, groups might choose to run separate activities for the participants with the highest context, particularly grad students, or split cohorts in an introductory program based on participants’ experience with ML.
Succession planning is a significant challenge for groups [Confident]
Appendix A: Retrospective document guidelines
We asked AI safety organizers from 11 universities to write retrospective documents on their organization. For the most part, we asked organizers to write up “whatever they thought to be most relevant” and provided guidelines to spark thinking on some organizing topics. The guidelines are italicized below.
The document can be written in bullet-point format—it should be readable and dive straight into the meat of things. Some things that you can include in the document:
Your org’s theory of change—target audiences, how you facilitate these audiences furthering the goals of AI safety
Your org’s structure—board structure; list & structure of fellowships, reading groups, research projects, etc. Have you tried different structures? How have they each worked out? Why do you have the structure you have now?
Advertisement techniques—where do you advertise? (club fairs, symposiums, at other clubs’ events, etc.) What rhetoric do you use? Do you hand out books/papers? What is your online presence? etc.
Interaction with other organizations at your school—do you collaborate at all with other orgs? This could include introducing an AI Safety paper to your school’s AI club reading group, integrating some amount of AI Safety material into general AI workshops/fellowships, co-hosting events with EA and AIS, etc. How has this gone?
Activities—guest speaker events, debates, socials, hackathons, etc. Which things have gone particularly well, and why? Which ones were flops?
External support—has your org gone through the University Group Accelerator Program (UGAP)? How about the Organizer Support Program (OSP)? How do you go about getting funding? Any niche grants / support for specific things that you think others may not know about?
Appendix B: OASIS poll & discussion
The table below includes the full list of claims included in our poll at OASIS 3.0; for each claim, we recorded the number of organizers who agreed, the number who disagreed, and the number who wanted to discuss this claim (independent of their agreement/disagreement). The claims with the highest number of votes in the “# want discuss” column were those that we went on to discuss in the second half of the meeting.
| Category | Question | # agree | # disagree | # want discuss |
|---|---|---|---|---|
| Programs | ML upskilling bootcamps (e.g., ARENA, MLAB) require too much commitment for students, and therefore not worth running in the context of uni groups | 3.5 | 5 | 4 |
| | ML upskilling bootcamps should be run in the form of code sprints (small number of long, intense sessions) rather than shorter weekly meetings | 6 | 2 | 4 |
| | Research projects are generally good after-intro activities for group participants | 7 | 5 | 6 |
| | Talks and panels (with high quality guests) are effective for start-of-the-semester recruiting for fellowships | 6 | 4 | 5 |
| | Talks and panels (with high quality guests) are effective for improving the group’s perception on campus | all | none | none |
| | Talks and panels (with high quality guests) are effective for motivating existing members of the group | 6.5 | 1 | 3 |
| Fellowships | The beginning of the pipeline should not be an intro fellowship but rather a shorter introductory experience. This reduces wasted effort on people who won’t engage further | 5 | 2 | 7 |
| | You shouldn’t tweak AISF too much for your fellowships | 6.5 | 6 | 5 |
| | Readings should be done in-session | 9 | 2 | 0 |
| | Even introductory fellowships should have hands-on content. For example, participants should write code and not just read papers | 2.5 | 7 | 5 |
| Other student clubs | If there’s an AI ethics club on campus, you should try collaborating with them | 8 | 0.5 | 6 |
| | If there is an AI club at your school, you should collaborate with this AI club (say, by proposing an AIS paper in their AI reading group) | all | none | none |
| | If you have socials or coworking sessions, it’s fine to co-host them with your school’s EA club | 3.5 | 9 | 7 |
| Target audience | When selecting participants to fellowships, research projects, etc., you should select for talent over motivation (e.g., you should not accept someone to a technical fellowship who is highly motivated to learn but has no CS or ML experience) | 3 | 9 | all |
| | You should be selective (say, accept < 50%) for your intro fellowships | 9 | 2 | 5 |
| | You should be selective in your membership, offer exclusive member-only events and programs, and kick members out when they have been inactive | all | none | 1 |
| Advertising | Advertising should explicitly mention x-risk/catastrophic concerns | 9 | 0.5 | 3 |
| | You should be as truthful as possible in your outreach, including being ~doomy | 3.5 | 8 | 8 |
| Group culture | Socials should be run pretty frequently (every one or two weeks) | all | none | none |
| Reading groups | People should be able to propose papers to read (beyond just voting on them) | ? | 3 | 1 |
| Faculty | Most groups are not doing enough outreach to faculty members | | | |