I went to 80,000 Hours and hit the "start here" button, and it starts with climate change, then nuclear war, then pandemics, then AI. Long-termism is gestured at in passing on the way to pandemics. So I think this is pretty much already done. I expect that leading with AI instead of climate change would lose readers, and I expect this is something they've thought about.
The summary of Holden Karnofsky's Most Important Century series is five points long, and AI risk is introduced at the second point: "The long-run future could come much faster than we think, due to a possible AI-driven productivity explosion". It's not clear to me what you would change, if anything.
Oh, neat! I haven’t looked at their introduction in a while, that’s a much better pitch than I remember! Kudos to them.