I definitely agree that policymakers, labs, and journalists seem to be “waking up” to AGI risk recently. However, the wakeup is not a binary thing, and there’s still a lot of additional wakeup that needs to happen before people behave responsibly enough to keep the risk below, say, 10%. And my timelines are short enough that I don’t currently expect that to happen in time.
What about the technical alignment problem crux?
Based on my personal experience in pandemic resilience, additional wakeups can proceed swiftly as soon as a specific society-scale harm is realized.
Specifically, waking up to over-reliance harms and addressing them (especially within security OODA loops) would buy time for good-enough continuous alignment.