Predicted AI alignment event/meeting calendar
Update 2020-06-21: Linda Linsefors made the AI Safety Google Calendar. I published How to make a predicted AI alignment event/meeting calendar, in case you want to make your own.
Update 2021-11-21: The AI Safety Google Calendar is not in a state (anymore?) where it would be a replacement for the kind of calendar I maintained.
I kept this calendar up-to-date between August 2019 and April 2020. Since I’ve left AI alignment (see also I’m leaving AI alignment – you better stay) and nobody has told me that they want me to keep this calendar up-to-date, I have stopped maintaining it.
If you find this calendar useful and would like it to continue to be kept up-to-date, please tell me in the comments or via PM. If you want to contribute to the community by maintaining this calendar, let me know and I’ll share my process with you.
I update this every month. If you know of more events, please comment, PM or email me. The same goes for events that I listed, but which won’t actually take place.
See also (I won’t repeat here what is listed there):
Last updated (search for ‘UPDATED’): 2020-04-18
Next update will be published by: 2020-05-18 – If someone tells me that they find this calendar useful. See also the paragraph at the top.
2020
Many of the events went virtual because of COVID-19.
(Apparently no SafeML workshop at ICLR 2020.)
April, cyberspace workshop Towards Trustworthy ML: Rethinking Security and Privacy for ML at ICLR 2020 (only somewhat related to AIA)
April proposal submission deadline for the AISafety workshop at IJCAI
May, cyberspace research retreat of the AI Safety Camp Toronto – The in-person event is cancelled. The organizers are doing their best to put together a virtual camp.
UPDATED May paper submission deadline for the Third International Workshop on Artificial Intelligence Safety Engineering
UPDATED May, cyberspace Web Technical AI Safety Unconference
May, cyberspace Workshop on Assured Autonomous Systems at the 41st IEEE Symposium on Security and Privacy
June application deadline for the Human-aligned AI Summer School (my guess based on 2019)
July/August, Prague CZE Human-aligned AI Summer School (my guess based on 2018, 2019)
July, Yokohama JPN AISafety workshop at IJCAI 2020
August, Bodega Bay USA MIRI Summer Fellows Program (my guess based on 2019 – There were also MSFPs/AISFPs in 2015-2018.)
September, Lisbon PRT International Workshop on Artificial Intelligence Safety Engineering at SafeComp 2020
October, Prague CZE International Congress for the Governance of AI
Hey. Just a quick comment that I find this calendar useful and would like it to continue to be kept up-to-date if possible! Thanks
Thanks for letting me know! In response, I’ve added a link to Linda Linsefors’ calendar at the top of the article. I hope that is useful enough to you. Her calendar is focused on online events, but these days almost everything is online anyway. Also, she wrote that she might make a calendar for in-person events once we vanquish Covid.
Thank you, yes this is certainly useful!
Somehow the word “predicted” in the title (as opposed to, say, “future” or “planned”) led me to expect entries for things like “OpenAI releases explicit model of human utility function” and “Entire mass of planet earth converted to paperclips”...
If both of those things happened, I would be very interested in hearing about the person who decided to make a paperclip maximizer despite having an explicit model of the human utility function they could have implemented instead.
Actually, I wouldn’t be interested in anything. I would be paperclips.
It hardly seems to make sense to implement a utility function for a paperclip plant; your AI would be focused on solving death and making people happy instead of making more paperclips!
Thanks for pointing that out. Do you have a suggestion for a less misleading title?
Timeline of AI Alignment meetings
Dunno. I don’t think the way it is does any actual harm. Maybe something with “meetings” in it, as per Teerth Aloke’s suggestion.
Just ‘meeting’ sounds too unimportant. But I’ve added it to the title, which removes the ambiguity.
Seems weakly better for this to be organized with newer content at the top?
I can’t imagine a good format with new content at the top. But I will add markers, so people can quickly scan for changes. I assume that’s why you asked?
I was thinking just “reverse the order of the years/months”. (Which might not be “newest added”, but would be in the limit, and would mean you don’t have to scroll past years of irrelevant stuff before getting to the near-term stuff.)
I don’t understand. Would you like to have the September 2020 event at the top?
Maybe what is most relevant to you differs from what is most relevant to me. I delete all past events, which limits the ‘irrelevant stuff’. And most relevant to me are the events that are soonest.