Consider this a short review of the entire “The Most Important Century” sequence, not just the Introduction post.
This series was one of the first and most compelling things I read when I was starting to consider AI risk. It swayed me from thinking AI was a long way off and would likely have a moderate impact among technologies, to thinking AI will likely be transformative and arrive in the next few decades.
After that I decided to become an AI alignment researcher, in part because of these posts. So the impact of these posts on me personally was quite large.
I found out about this series on Ezra Klein’s podcast, where Holden was interviewed about it, before I was a LessWrong reader. Given the size of Ezra’s audience, I’d be surprised if I was the only one these posts influenced. So I would guess the reach of this series was pretty substantial, even beyond its impact on me personally.