I personally take AI risks seriously, and I think they are worth investigating and preparing for.
I have drifted towards a more skeptical position on risk over the last two years. This is due to a combination of seeing the societal reaction to AI, participating in several risk evaluation processes myself, and AI unfolding more gradually than I expected 10 years ago.
Currently I am more worried about concentration in AI development, and about whether unimproved humans will retain wealth over the very long term, than I am about a violent AI takeover.
Personal view as an employee: Epoch has always been a mix of EAs/safety-focused people and people with other views. I don’t think our core mission was ever explicitly about safety, for several reasons, including that some of us were personally uncertain about AI risk, and that an explicit commitment to safety might have undermined the perceived neutrality/objectivity of our work. The mission was to raise the standard of evidence for thinking about AI and to inform people so they can hopefully make better decisions.
My impression is that Matthew, Tamay and Ege were among the most skeptical about AI risk and had relatively long timelines more or less from the beginning. They have contributed enormously to Epoch, and I think we’d have done much less valuable work without them. I’m quite happy that they have been working with us until now; they could have moved to do direct capabilities work or anything else at any point if they had wanted to, and I don’t think they lacked opportunities to do so.
Finally, Jaime is definitely not the only one who still takes risks seriously (at the very least, I also do), even if there have been shifts in relative concern about different types of risk (e.g., ASI takeover vs. gradual disempowerment).
This comment suggests the shift happened over the last year or two (while also emphasising that Jaime, at least, still takes AI risk seriously): https://www.lesswrong.com/posts/Fhwh67eJDLeaSfHzx/jonathan-claybrough-s-shortform?commentId=X3bLKX3ASvWbkNJkH
Thank you, that is helpful information.