As far as I know, AI Impacts does not seem to have had any non-trivial positive impact on the epistemics of the AI x-risk community, nor does it seem to have helped with any governance efforts.
The best output I have encountered from people seemingly affiliated with AI Impacts is probably Zach's sequence on Slowing AI.
On the other hand, here are two examples (one, two) that immediately came to mind when I thought of AI Impacts. These two essays describe a view of reality that seems utterly pre-Sequences to me: the idea that chaotic dynamics in complex systems make reality inherently unpredictable in a way that limits the capabilities a superintelligence can have. Such an argument seems to imply that one should worry less about the possibility of superintelligent AGI systems ending humanity.
It seems like some of AI Impacts' research output cuts against the fundamental understanding of why the creation of unaligned AGI is an extinction risk.
Is AI Impacts being funded by MIRI? I sure hope not.
Re: governance efforts, you might be forgetting the quite impactful 2016 and 2022 AI Impacts surveys: https://wiki.aiimpacts.org/doku.php?id=ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:2022_expert_survey_on_progress_in_ai
I think these both had a pretty large effect on a bunch of governance and coordination work.
Another post that had a pretty major impact was Katja's "Let's Think about Slowing Down AI", which had a large effect on me and many others.
I also think the discontinuity studies have been quite helpful, and I've frequently referenced them when talking about takeoff dynamics.
I don't believe AI Impacts is being funded by MIRI. We're talking about fiscal sponsorship here, which means they're the same legal entity in the eyes of the government and share accounting, but typically (and as it looks to me in this case) they do their fundraising entirely separately and do not work together.
On their output, I'll mention that I found Katja's post "Discontinuous progress in history: an update" quite interesting when thinking about how quickly things change historically, and I think "Let's think about slowing down AI" helped get people thinking seriously about that topic.
I think my main critique would be that their total output is fairly small (i.e. a fair bit less than I would have predicted 6-7 years ago).