Hindsight is 20/20. I think you’re underemphasizing how our current state of affairs is fairly contingent on social factors, like the actions of people concerned about AI safety.
For example, I think this world is actually quite plausible, not incongruent: a world where AI capabilities progressed far enough to get us to something like ChatGPT, but somehow this didn’t cause a stir or wake-up moment for anyone who wasn’t already concerned about AI risk.
I can easily imagine a counterfactual world in which:
- ChatGPT shows that AI is helpful, safe, and easy to align.
- Policymakers are excited about accelerating the benefits of AI and unconvinced of the risks.
- Industry leaders and respected academics are unwilling to make public statements claiming that AI is an extinction risk, especially given the lack of evidence or analysis.
- Instead of the UK AI Safety Summit, we get a summit focused on driving innovation.
- AI labs play up how AIs can help with safety and prosperity while dismissing anything related to AI risk.