Totally agree that a major goal of GEB was to help people build intuition for things like “How could our minds be built out of things that follow purely mechanical rules?” that are now more well-accepted. With that in mind, there are two ways to interpret a lot of the stuff he writes about minds and artificial intelligence:
“This model I’m presenting is literally how the brain works/how future AIs will work.”
“This model is a plausible idea about how the brain might work/how future AIs might work, presented here to build intuition, not necessarily because I think it’s right.”
I spent a long time trying to figure out which of these interpretations was intended, and my best guess was that it was different for different claims, but usually somewhere in the middle. The stuff about grandmother modules was more like (1) -- he was really trying to argue from first principles that grandmother modules must exist. A lot of the stuff about AI was more like (2), I think (but still with a little bit of (1), which is why I think he still ought to be a little surprised by modern ML).
I’m actually very curious to what extent GEB helped put ideas like “the mind is a machine” and “it’s possible to create a thinking computer” into the water supply. Hofstadter’s arguments for these things felt a little different from the standard arguments, so it never occurred to me that he could be partly responsible for the widespread acceptance of these ideas. Maybe GEB convinced a bunch of people, who eventually came up with better arguments? Or maybe GEB had nothing to do with it; I honestly have no idea.
It may be interesting to read the ancient Slashdot thread where I first learned about GEB, to see what the buzz around it was like 20 years ago.