I feel like GEB has been diminished a bit by its own success. People reading it nowadays might go “what’s the big deal?” A big theme is how the mind can be a machine and still do stupid stuff, which had to be spelled out in the 70s but has pretty much permeated the relevant subcultures these days. And of course Hofstadter didn’t know a clear recipe for an actual AGI, so the speculative parts on that were left at the level of intriguing handwaving.
Totally agree that a major goal of GEB was to help people build intuition for things like “How could our minds be built out of things that follow purely mechanical rules?” that are now more widely accepted. With that in mind, there are two ways to interpret a lot of the stuff he writes about minds and artificial intelligence:
“This model I’m presenting is literally how the brain works/how future AIs will work.”
“This model is a plausible idea about how the brain might work/how future AIs might work, presented here to build intuition, not necessarily because I think it’s right.”
I spent a long time trying to figure out which of these interpretations was intended, and my best guess was that it was different for different claims, but usually somewhere in the middle. The stuff about grandmother modules was more like (1) -- he was really trying to argue from first principles that grandmother modules must exist. A lot of the stuff about AI was more like (2), I think (but still with a little bit of (1), which is why I think he still ought to be a little surprised by modern ML).
I’m actually very curious to what extent GEB helped put ideas like “the mind is a machine” and “it’s possible to create a thinking computer” into the water supply. Hofstadter’s arguments for these things felt a little different from the standard arguments, so it never occurred to me that he could be partly responsible for the widespread acceptance of these ideas. Maybe GEB convinced a bunch of people, who eventually came up with better arguments? Or maybe GEB had nothing to do with it; I honestly have no idea.
I thought a lot of it showed the little tricks that formal logic relies on. The concepts of true and false don’t exist in reality in any form; things just are. The representations and operations of logic are tools we use to build bigger, more useful statements. In that sense, how we categorize intelligence is very human, which is why I think AGI will end up being human-like: we use these methods to organize reality into representations that are meaningful and practical to us, and that we mentally recognize as individual concepts and entities.
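To make that concrete, here’s a toy sketch of my own (not from the book): “true” and “false” are just labels in a representation we impose on the world, and operators like AND/OR/NOT are the tools that let us compose small statements into bigger ones.

```python
from typing import Callable, Dict

# A proposition is just a function from a world-state (an assignment of
# variables) to a boolean -- the boolean lives in our representation of
# the world, not in the world itself.
Prop = Callable[[Dict[str, bool]], bool]

def var(name: str) -> Prop:
    return lambda world: world[name]

def NOT(p: Prop) -> Prop:
    return lambda world: not p(world)

def AND(p: Prop, q: Prop) -> Prop:
    return lambda world: p(world) and q(world)

def OR(p: Prop, q: Prop) -> Prop:
    return lambda world: p(world) or q(world)

# Compose small statements into a bigger one: raining AND NOT umbrella.
raining, umbrella = var("raining"), var("umbrella")
gets_wet = AND(raining, NOT(umbrella))

print(gets_wet({"raining": True, "umbrella": False}))  # True
print(gets_wet({"raining": True, "umbrella": True}))   # False
```

The point of the sketch is that nothing in the “world” dictionary is inherently true or false; truth only appears once we choose to describe the world in these terms.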
It may be interesting to read the ancient Slashdot thread where I first learned about GEB, to see what the buzz around it was like 20 years ago.