Launching Applications for the Global AI Safety Fellowship 2025!
TL;DR: Applications are accepted on a rolling basis until 31 December 2024 for a 3-6 month, fully funded research program in AI safety. Fellows work with some of the world’s leading AI safety labs and research institutions. This will be a full-time, in-person placement, after which there may be opportunities to continue the engagement full-time based on mutual fit and the fellow’s performance.
Learn more and apply to be a fellow here, or refer someone you think would be awesome for this here. We’re also looking for Talent Identification Advisors (or Consultants) – find out more about the role here.
—
Impact Academy is an organisation that runs cutting-edge fellowships to enable global talent to use their careers to contribute to the safe and beneficial development of AI.
Impact Academy’s Global AI Safety Fellowship is a 3-6 month, fully funded research program for exceptional STEM talent worldwide.
Fellows will work with the world’s leading AI safety organisations to advance the safe and beneficial development of AI. Our placement partners include the Center for Human Compatible AI (CHAI), FAR.AI, Conjecture, UK AISI, and the Mila–Quebec AI Institute.
Applications are being accepted on a rolling basis until 31 December 2024, but early applications are strongly encouraged. The exact start date of the Fellowship will be decided by the candidate and the placement organisation.
Fellows will work in person with partner organisations, subject to visa approval. If fellows experience visa delays, we will enable them to work from our shared offices at global AI safety hubs.
Ideal candidates for the program will have:
Demonstrated programming proficiency (e.g. >1 year of relevant professional experience).
A strong background in ML (e.g. full-semester university courses, significant research projects, or publications in ML).
A track record of excellence (e.g. outstanding achievements in academics or other areas).
An interest in pursuing research to reduce the risks from advanced AI systems.
Please apply even if you do not meet all qualifications! Competitive candidates may excel in some areas while developing in others.
Fellows will receive a comprehensive financial package covering their salary, living expenses, and research costs, along with dedicated resources for building foundational knowledge in AI safety, regular mentorship, and 1:1 coaching calls with the Impact Academy team. Fellows who perform well will have reliable opportunities to continue working full-time with their placement organisation.
To learn more and apply, visit our website.
Know someone who would be a good fit? Refer them through this form. There is a $2,000 reward for anyone who refers a candidate who is selected for placement.
For any queries, please reach out to us at aisafety@impactacademy.org.
I notice this is downvoted and by a new user. On the surface, it looks like something I would strongly consider applying to, depending on what happens in my personal life over the next month. Can anyone let me know (either here or privately) if this is reputable?
Hi,
It was quite strange to see it downvoted, and I’m not sure what the issue was. My guess is that the initial username might have played a role; since this is my first post on LessWrong, that may have caused some concern.
As for credibility, you can see that this fellowship has been shared on Twitter by individuals from the partner organisations themselves, as seen here, here, and here.
If you’d like, I’m happy to discuss this further on a call to help alleviate any concerns you may have.
The post feels very salesy to me, was written by an org account, and also made statements that seemed false to me, like the claim that fellows will work with “the world’s leading AI safety organisations”.
(Of those, maybe FAR.AI would be deserving of that title, but also, I feel like there is something bad about trying to award that title in the first place.)
There is also no disambiguation of whether this program is focused on existential-risk efforts or on near-term AI efforts (bias, filter bubbles, censorship, etc.), the latter of which I think is usually bad for the world, or at the very least a lot less valuable.
Just commenting to say that this is convincing enough (and the application sufficiently low-effort) for me to apply later this month, conditional on being in a position where I could theoretically accept such an offer.