The opening statements made it clear that no one involved cared about or was likely even aware of existential risks.
I think this is a significant overstatement given, especially, these remarks from Sen. Hawley:
And I think my question is, what kind of an innovation is [AI] going to be? Is it gonna be like the printing press that diffused knowledge, power, and learning widely across the landscape that empowered, ordinary, everyday individuals that led to greater flourishing, that led above all to greater liberty? Or is it gonna be more like the atom bomb, huge technological breakthrough, but the consequences severe, terrible, continue to haunt us to this day? I don’t know the answer to that question. I don’t think any of us in the room know the answer to that question. Cause I think the answer has not yet been written. And to a certain extent, it’s up to us here and to us as the American people to write the answer.
Obviously he didn’t use the term “existential risk.” But that’s not the standard we should use to determine whether people are aware of risks that could be called, in our lingo, existential. Hawley clearly believes there is a real possibility that this could be an atomic-bomb-level invention, which is pretty good (but not decisive) evidence that, if asked, he would agree that this could cause something like human extinction.