Douglas Hofstadter changes his mind on Deep Learning & AI risk (June 2023)?
Link post
A podcast interview (posted 2023-06-29) with noted AI researcher Douglas Hofstadter discusses his career and current views on AI (via Edward Kmett), later amplified to David Brooks.
Hofstadter has previously energetically criticized GPT-2/3 models (and deep learning and compute-heavy GOFAI). These criticisms were widely circulated & cited, and apparently many people found Hofstadter a convincing & trustworthy authority when he was negative on deep learning capabilities & prospects, and so I found his most-recent comments (which amplify things he has been saying in private since at least 2014) of considerable interest.
This interview (EDIT: and earlier material, it turns out) appears to have gone under the radar, perhaps because it’s a video, so below I excerpt from the second half where he discusses DL progress & AI risk:
Douglas Hofstadter: …In my book, I Am a Strange Loop, I tried to set forth what
it is that really makes a self or a soul. I like to use the word
“soul”, not in the religious sense, but as a synonym for “I”, a
human “I”, capital letter “I.” So, what is it that makes a human
being able to validly say “I”? What justifies the use of that
word? When can a computer say “I” and we feel that there is a
genuine “I” behind the scenes?
I don’t mean like when you call up the drugstore and the
chatbot, or whatever you want to call it, on the phone says,
“Tell me what you want. I know you want to talk to a human
being, but first, in a few words, tell me what you want. I can
understand full sentences.” And then you say something and it
says, “Do you want to refill a prescription?” And then when I
say yes, it says, “Gotcha”, meaning “I got you.” So it acts as
if there is an “I” there, but I don’t have any sense whatsoever
that there is an “I” there. It doesn’t feel like an “I” to me,
it feels like a very mechanical process.
But in the case of more advanced things like ChatGPT-3 or GPT-4, it feels like there
is something more there that merits the word “I.” The question
is, when will we feel that those things actually deserve to be
thought of as being full-fledged, or at least partly fledged,
“I”s?
I personally worry that this is happening right now. But it’s
not only happening right now. It’s not just that certain things
that are coming about are similar to human consciousness or
human selves. They are also very different, and in one way, it
is extremely frightening to me. They are extraordinarily much
more knowledgeable and they are extraordinarily much faster. So
that if I were to take an hour in doing something, the ChatGPT-4
might take one second, maybe not even a second, to do exactly
the same thing.
And that suggests that these entities, whatever you want to
think of them, are going to be very soon, right now they still
make so many mistakes that we can’t call them more intelligent
than us, but very soon they’re going to be, they may very well
be more intelligent than us and far more intelligent than us.
And at that point, we will be receding into the background in
some sense. We will have handed the baton over to our
successors, for better or for worse.
And I can understand that if this were to happen over a long
period of time, like hundreds of years, that might be okay. But
it’s happening over a period of a few years. It’s like a tidal
wave that is washing over us at unprecedented and unimagined
speeds. And to me, it’s quite terrifying because it suggests
that everything that I used to believe was the case is being
overturned.
Q: What are some things specifically that terrify you? What
are some issues that you’re really...
D. Hofstadter: When I started out studying cognitive science
and thinking about the mind and computation, you know, this was
many years ago, around 1960, and I knew how computers worked and
I knew how extraordinarily rigid they were. You made the
slightest typing error and it completely ruined your program.
Debugging was a very difficult art and you might have to run
your program many times in order to just get the bugs out. And
then when it ran, it would be very rigid and it might not do
exactly what you wanted it to do because you hadn’t told it
exactly what you wanted to do correctly, and you had to change
your program, and on and on.
Computers were very rigid and I grew up with a certain feeling
about what computers can or cannot do. And I thought that
artificial intelligence, when I heard about it, was a very
fascinating goal, which is to make rigid systems act fluid. But
to me, that was a very long, remote goal. It seemed infinitely
far away. It felt as if artificial intelligence was the art of
trying to make very rigid systems behave as if they were fluid.
And I felt that would take enormous amounts of time. I felt it
would be hundreds of years before anything even remotely like a
human mind would be asymptotically approaching the level of the
human mind, but from beneath.
I never imagined that computers would rival, let alone surpass,
human intelligence. And in principle, I thought they could rival
human intelligence. I didn’t see any reason that they couldn’t.
But it seemed to me like it was a goal that was so far away, I
wasn’t worried about it. But when certain systems started
appearing, maybe 20 years ago, they gave me pause. And then this
started happening at an accelerating pace, where unreachable
goals and things that computers shouldn’t be able to do started
toppling. The defeat of Garry Kasparov by Deep Blue, and then
going on to Go systems, Go programs, well, systems that could
defeat some of the best Go players in the world. And then
systems got better and better at translation between languages,
and then at producing intelligible responses to difficult
questions in natural language, and even writing poetry.
And my whole intellectual edifice, my system of beliefs… It’s
a very traumatic experience when some of your most core beliefs
about the world start collapsing. And especially when you think
that human beings are soon going to be eclipsed. It felt as if
not only are my belief systems collapsing, but it feels as if
the entire human race is going to be eclipsed and left in the
dust soon. People ask me, “What do you mean by ‘soon’?” And I
don’t know what I really mean. I don’t have any way of knowing.
But some part of me says 5 years, some part of me says 20 years,
some part of me says, “I don’t know, I have no idea.” But the
progress, the accelerating progress, has been so unexpected, so
completely caught me off guard, not only myself but many, many
people, that there is a certain kind of terror of an oncoming
tsunami that is going to catch all humanity off guard.
It’s not clear whether that will mean the end of humanity in the
sense of the systems we’ve created destroying us. It’s not clear
if that’s the case, but it’s certainly conceivable. If not, it
also just renders humanity a very small phenomenon compared to
something else that is far more intelligent and will become
incomprehensible to us, as incomprehensible to us as we are to
cockroaches.
Q: That’s an interesting thought. [nervous laughter]
Hofstadter: Well, I don’t think it’s interesting. I think
it’s terrifying. I hate it. I think about it practically all the
time, every single day. [Q: Wow.] And it overwhelms me and depresses
me in a way that I haven’t been depressed for a very long time.
Q: Wow, that’s really intense. You have a unique
perspective, so knowing you feel that way is very powerful.
Q: How have LLMs, large language models, impacted your view
of how human thought and creativity works?
D. Hofstadter: Of course, it reinforces the idea that human creativity
and so forth come from the brain’s hardware. There is nothing
else than the brain’s hardware, which is neural nets. But one
thing that has completely surprised me is that these LLMs and
other systems like them are all feed-forward. It’s like the
firing of the neurons is going only in one direction. And I
would never have thought that deep thinking could come out of a
network that only goes in one direction, out of firing neurons
in only one direction. And that doesn’t make sense to me, but
that just shows that I’m naive.
It also makes me feel that maybe the human mind is not so
mysterious and complex and impenetrably complex as I imagined it
was when I was writing Gödel, Escher, Bach and writing I Am a
Strange Loop. I felt at those times, quite a number of years
ago, that as I say, we were very far away from reaching anything
computational that could possibly rival us. It was getting more
fluid, but I didn’t think it was going to happen, you know,
within a very short time.
And so it makes me feel diminished. It makes me feel, in some
sense, like a very imperfect, flawed structure compared with
these computational systems that have, you know, a million times
or a billion times more knowledge than I have and are a billion
times faster. It makes me feel extremely inferior. And I don’t
want to say deserving of being eclipsed, but it almost feels
that way, as if we, all we humans, unbeknownst to us, are soon
going to be eclipsed, and rightly so, because we’re so imperfect
and so fallible. We forget things all the time, we confuse
things all the time, we contradict ourselves all the time. You
know, it may very well be that that just shows how limited we
are.
Q: Wow. So let me keep going through the questions. Is there
a time in our history as human beings when there was something
analogous that terrified a lot of smart people?
D. Hofstadter: Fire.
Q: You didn’t even hesitate, did you? So what can we learn
from that?
D. Hofstadter: No, I don’t know. Caution, but you know, we may have
already gone too far. We may have already set the forest on
fire. I mean, it seems to me that we’ve already done that. I
don’t think there’s any way of going back.
When I saw an interview with Geoff Hinton, who was probably the most
central person in the development of all of these kinds of
systems, he said something striking. He said he might regret his
life’s work. He said, “Part of me regrets all of my life’s
work.” The interviewer then asked him how important these
developments are. “Are they as important as the Industrial
Revolution? Is there something
analogous in history that terrified people?” Hinton thought for
a second and he said, “Well, maybe as important as the wheel.”
(YouTube transcript cleaned up by GPT-4 & checked against audio.)