Fascinating account by AI hero Douglas Hofstadter of his struggle with the realization that human-level AGI is near, and probably ASI as well [LW link to here]
Fascinating both because it’s uncommon for someone as old, stubborn, and brilliant as Hofstadter to change his mind, and because I think he’s changing his mind to an excessive extent here.
I.e. yes, he was wrong in his view that “the Singularity is far” … however, I don’t think he’s right that transformers can get to human-level AGI without an infusion of a bunch of cognitive-science-based architecture and strange-loopy stuff.
I.e. I think a bunch of the ideas Hofstadter spent his life working on are actually going to be critical in getting from LLMs onward to human-level AGI.
It seems he was previously thinking that DNNs were useless and headed totally in the wrong direction from AGI, so now he’s taken aback by recent developments more so than those of us who always thought DNNs were cool but not the whole picture.
So now that he sees he was wrong in his underestimation of DNNs, instead of saying “Hmm OK maybe they’re part of the story, but maybe I can use my lifetime of AI/cognitive insights to fill in the rest of the story” he’s sort of throwing his hands up....
I am reminded (though the analogy isn’t precise) of Dostoevsky’s observation that the atheist and the evangelist are clustered closely together compared to the agnostic...
HN comments.
Ben Goertzel: