Abstract. The development of Artificial General Intelligence (AGI) promises to be a major event. Along with its many potential benefits, it also raises serious safety concerns (Bostrom, 2014). The intention of this paper is to provide an easily accessible and up-to-date collection of references for the emerging field of AGI safety. A significant number of safety problems for AGI have been identified. We list these, and survey recent research on solving them. We also cover works on how best to think of AGI from the limited knowledge we have today, predictions for when AGI will first be created, and what will happen after its creation. Finally, we review the current public policy on AGI.
Thank you for the link.
A few questions that come to mind:
What would be the differences / improvements from the “Concrete problems in AI Safety” paper? (https://arxiv.org/abs/1606.06565)
What would be the most important concrete problems to work on (for instance for a thesis)?
More generally, does anyone know if someone has already made some kind of dependency graph (e.g. this problem must be solved before that one)?