GEB is great as many things: as an introduction to formal systems, self-reference, several computer science topics, Gödel’s First Incompleteness Theorem, and more. It is also a unique and very entertaining hybrid of art and nonfiction. Without denying any of those merits, the book’s weakest point is actually its core message, quoted in the OP as
GEB is a very personal attempt to say how it is that animate beings can come out of inanimate matter… GEB approaches [this question] by slowly building up an analogy that likens inanimate molecules to meaningless symbols, and further likens selves… to certain special swirly, twisty, vortex-like, and meaningful patterns that arise only in particular types of systems of meaningless symbols.
What Hofstadter does is the following: he identifies self-awareness and self-reference as core features of consciousness and/or intelligence, and he embarks on a long journey across various fields in search of phenomena that also have something to do with self-reference. This is some kind of weird essentialism; Hofstadter tries to reduce extremely high-level features of complex minds to (superficially) similar features that arise in enormously simpler formal and physical systems. Hofstadter doesn’t believe in ontologically fundamental mental entities, so he’s far from classic essentialism, yet he believes in very low-level “essences” of consciousness that percolate up to high-level minds. This abrupt jumping across levels of organization reminds me a bit of those people who try to derive practical everyday epistemic implications from the First Incompleteness Theorem (or get dispirited because of some implied “inherent unknowability” of the world).
Now, to be fair, GEB considers medium levels of organization in its two chapters on AI, but it is far less elaborate on those matters than on formal systems, for instance. The AI chapters are also the most outdated now, and even there Hofstadter isn’t really attempting any noteworthy reduction of minds; instead he briefly ponders then-contemporary AI topics such as Turing tests, computer chess, SHRDLU, Bongard problems, symbol grounding, etc.
To be even fairer, valid reduction of high-level features of human minds is extremely difficult. Evolutionary psychology and cognitive science can do it occasionally, but they don’t yet attempt a reduction of general intelligence and consciousness itself. It is probably understandable that Hofstadter couldn’t see that far ahead into the future of cogsci, ev-psych, and AI. Eliezer Yudkowsky’s Levels of Organization in General Intelligence is the only reductionist work I know that tries to wrestle with all of it at once, and while it is of course not definitive or even fully fleshed out, I think it represents the kind of mode of thinking that could possibly yield genuine insights about the mysteries of consciousness. In contrast, GEB never really enters that mode.
Agreed on the weakness of Hofstadter’s core message, but for another reason. While self-awareness is one feature that is often referred to as “consciousness”, it isn’t the biggest trouble-maker. The “qualia” of experience can occur without self-awareness, probably even in organisms utterly lacking it. Subjective feels have been the biggest stumbling block to philosophical progress, and turning away from them (or worse, thinking that they amount to a kind of self-reference) doesn’t help the reductionist cause.