Here is a timeline of AI safety that I originally wrote in 2017. The timeline has been updated several times since then, mostly by Vipul Naik.
Here are some highlights by year from the timeline since 2013:
Year | Highlights |
--- | --- |
2013 | Research and outreach focused on forecasting and timelines continue. Connections with the nascent effective altruism movement strengthen. The Center for the Study of Existential Risk and the Foundational Research Institute launch. |
2014 | Superintelligence: Paths, Dangers, Strategies by Nick Bostrom is published. The Future of Life Institute is founded and AI Impacts launches. AI safety gets more mainstream attention, including from Elon Musk, Stephen Hawking, and the fictional portrayal Ex Machina. While forecasting and timelines remain a focus of AI safety efforts, the effort shifts toward the technical AI safety agenda, with the launch of the Intelligent Agent Foundations Forum. |
2015 | AI safety continues to get more mainstream, with the founding of OpenAI (supported by Elon Musk and Sam Altman) and the Leverhulme Centre for the Future of Intelligence, the Open Letter on Artificial Intelligence, the Puerto Rico conference, and coverage on Wait But Why. This also appears to be the last year that Peter Thiel donates in the area. |
2016 | Open Philanthropy makes AI safety a focus area; it would ramp up giving in the area considerably starting around this time. The landmark paper “Concrete Problems in AI Safety” is published, and OpenAI’s safety work picks up pace. The Center for Human-Compatible AI launches. The annual tradition of LessWrong posts providing an AI alignment literature review and charity comparison for the year begins. AI safety continues to get more mainstream, with the Partnership on AI and the Obama administration’s efforts to understand the subject. |
2017 | This is a great year for cryptocurrency prices, prompting a number of donations to MIRI from people who got rich through cryptocurrency. The AI safety funding and support landscape changes somewhat with the launch of the Berkeley Existential Risk Initiative (BERI) (and funding of its grants program by Jaan Tallinn) and the Effective Altruism Funds, specifically the Long-Term Future Fund. Open Philanthropy makes several grants in AI safety, including a $30 million grant to OpenAI and a $3.75 million grant to MIRI. AI safety attracts dismissive commentary from Mark Zuckerberg, while Elon Musk continues to highlight its importance. The year begins with the Asilomar Conference and the Asilomar AI Principles, and initiatives such as AI Watch and the AI Alignment Prize begin toward the end of the year. |
2018 | Activity in the field of AI safety becomes more steady, in terms of both ongoing discussion (with the launch of the AI Alignment Newsletter, AI Alignment Podcast, and Alignment Forum) and funding (with structural changes to the Long-Term Future Fund to make it grant more regularly, the introduction of the annual Open Philanthropy AI Fellowship grants, and more grantmaking by BERI). Near the end of the year, MIRI announces its nondisclosure-by-default policy. Ought, Median Group, and the Stanford Center for AI Safety launch during the year. |
2019 | The Center for Security and Emerging Technology (CSET), which focuses on AI safety and other security risks, launches with a five-year, $55 million grant from Open Philanthropy. The Stanford Institute for Human-Centered Artificial Intelligence (HAI) launches. Grantmaking from the Long-Term Future Fund picks up pace; BERI hands off the grantmaking of Jaan Tallinn's money to the Survival and Flourishing Fund (SFF). Open Philanthropy begins using the Committee for Effective Altruism Support to decide grant amounts for some of its AI safety grants, including grants to MIRI. OpenAI unveils its GPT-2 model but does not release the full model initially; this sparks discussion on disclosure norms. |
2020 | Andrew Critch and David Krueger release their ARCHES paper. OpenAI unveils GPT-3, leading to further discussion of AI safety implications. AI Safety Support launches. The funding ecosystem continues to mature: Open Philanthropy and the Survival and Flourishing Fund continue to make large grants to established organizations, while the Long-Term Future Fund increasingly shifts focus to donating to individuals. |
I previously shared timelines for MIRI and FHI here on LessWrong.
Any thoughts on the timeline (such as events to add, events to remove, corrections, etc.) would be greatly appreciated! I’m also curious to hear thoughts about how useful a timeline like this is (or how useful it could become after more work is put into it).
The effort is commendable. I am wondering why you started at 2013.
Debatably, it is the things that happened prior to 2013 that are especially of interest.
I am thinking of the early speculations by Turing, von Neumann, and Good, continuing on to the founding of SI/MIRI some twenty years ago, and much more in between that I am less familiar with but would like to know more about!
We cover a larger period in the overall summary and full timeline. The summary by year starts in 2013 because (it appears that) that's around the time when enough started happening each year, though we might extend it a little further into the past as we continue to expand the timeline.
Ah! Excuse me for my drive-by comment, I should have clicked the link.
That’s later in the linked wiki page: https://timelines.issarice.com/wiki/Timeline_of_AI_safety#Full_timeline
Every self-respecting scientific effort traces its roots back to Ancient Greece. I am somewhat disappointed.
I guess Golem is cool, too.
Excellent, thanks! Now I just need a similar timeline for near-term safety engineering / assured autonomy as they relate to AI, and then a good part of a paper I’m working on will have just written itself.