Robert Miles has a great channel spreading AI safety content. There’s also Rational Animations and Siliconversations and In a Nutshell.
I think FLI does a lot of work in outreach + academia.
Connor Leahy does a lot of outreach and he’s one of my favorite AI safety advocates.
Nonlinear doesn’t do outreach to academia in particular, but we do target people working in ML, which is a lot of academia.
AI Safety Memes does a lot of outreach but is focused on broad appeal, definitely not specifically academia.
Pause AI and Stop AI both work on outreach to the broader public.
CAIS does great outreach work, though I'm not sure whether they do anything academia-specific.
Are you on the Nonlinear Network? You can sort by the category of “content/media creation” to find a bunch of funding-constrained AI safety orgs working on advocacy. A quick scan of that section shows 36.
You might be able to find more possibilities on the AI safety map too: https://map.aisafety.world