Right now, talking about AI risk is like yelling about covid in February 2020. I and many others spent the end of that February in distress over impending doom, despairing that absolutely nobody seemed to care. Then, literally within a couple of weeks, America went from dismissing covid to everyone locking down.
I don’t think comparing misaligned AI to covid is fair. With covid, real people were dying and the concept of “the virus will spread” was easy to grasp, yet almost every government on Earth was still MASSIVELY too late in taking action. Even when the pandemic was in full swing they were STILL making huge mistakes. And now, post-pandemic, have any lessons been learned in prep for the next one? No.
Far too slow to act, stupid decisions when acting, learned nothing even after the fact.
With AI it’s much worse, because the day before the world ends everything will look perfectly normal.
Even in a hypothetical scenario where everyone gets a free life, like in a video game, so that when the world ends we all just wake up the next morning, people would still build the AGI again anyway.
I disagree. I think that “everything will look fine until the moment we are all doomed” is quite unlikely. I think we are going to get clear warning shots, and should be prepared to capitalize on those in order to bring political force to bear on the problem. It’s gonna get messy. Dumb, unhelpful legislation seems nearly unavoidable. I’m hopeful that having governments flailing around with a mix of bad and good legislation and enforcement will overall be better than them doing nothing.