Hofstadter’s long-time associate, friend, and co-author Daniel Dennett has discussed Hofstadter’s change of heart in a recent (December 2023?) Theories of Everything podcast interview: https://www.youtube.com/watch?v=bH553zzjQlI&t=7195s
I have not watched it myself, but AI Safety Memes has excerpted it as follows:
Legendary scholar @DanielDennett agrees w/ Douglas Hofstadter, is “very distressed” by AI
Hofstadter: “[AI] is terrifying. I hate it. I think about it practically all the time, every single day”
“[Humanity] is about to be eclipsed and left in the dust”
“An oncoming tsunami that is going to catch all of humanity off guard.”
DENNETT: [The alignment problem is extremely hard]
“It would be like someone saying ‘I know the solution to the problem of Israel in the Arab world, the Palestinians. It’s simple.’ No, it isn’t. No, it isn’t. And if you think it is, that’s almost self-disqualifying.
“That is such a complex issue. You have to know so much, and appreciate so much, and set aside so many misconceptions and oversimplifications to make any sense of it. And I think the alignment problem is like that.
“If someone tells you they’ve got the alignment problem solved, that’s two strikes against them.
“They are wildly optimistic. They say ‘we know how to write control architectures that prevent them from doing X, Y, and Z.’
“Oh really? These systems are huge. They’re gigantic software entities. Has Microsoft or anyone ever invented a program remotely that size that didn’t have bugs in it? No. No, absolutely not.
“Programs can get out of control. There is no magic bullet that is going to make such huge systems transparent.”