A brief collection of Hinton’s recent comments on AGI risk
Since I’ve seen some people doubt whether Geoff Hinton (popularizer of the backpropagation algorithm and one of the original developers of deep learning) is actually concerned about AGI risk (as opposed to, e.g., the NYT spinning an anti-tech agenda in their interview of him), I thought I’d put together a brief collection of his recent comments on the topic.
Written interviews
New York Times, May 1:
Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work. [...]
Dr. Hinton [originally] thought [systems like ChatGPT were] a powerful way for machines to understand and generate language, but [...] inferior to the way humans handled language. [...]
Then, last year [...] his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.” [...]
Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. [...]
“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
MIT Technology Review, May 2:
People are also divided on whether the consequences of this new form of intelligence, if it exists, would be beneficial or apocalyptic. “Whether you think superintelligence is going to be good or bad depends very much on whether you’re an optimist or a pessimist,” he says. “If you ask people to estimate the risks of bad things happening, like what’s the chance of someone in your family getting really sick or being hit by a car, an optimist might say 5% and a pessimist might say it’s guaranteed to happen. But the mildly depressed person will say the odds are maybe around 40%, and they’re usually right.”
Which is Hinton? “I’m mildly depressed,” he says. “Which is why I’m scared.” [...]
… even if a bad actor doesn’t seize the machines, there are other concerns about subgoals, Hinton says.
“Well, here’s a subgoal that almost always helps in biology: get more energy. So the first thing that could happen is these robots are going to say, ‘Let’s get more power. Let’s reroute all the electricity to my chips.’ Another great subgoal would be to make more copies of yourself. Does that sound good?” [...]
When Hinton saw me out, the spring day had turned gray and wet. “Enjoy yourself, because you may not have long left,” he said. He chuckled and shut the door.
Video interviews
CNN, May 2:
INTERVIEWER: You’ve spoken out saying that AI could manipulate or possibly figure out a way to kill humans. How could it kill humans?
HINTON: Well eventually, if it gets to be much smarter than us, it’ll be very good at manipulation because it will have learned that from us. And there are very few examples of a more intelligent thing being controlled by a less intelligent thing. And it knows how to program, so it’ll figure out ways of getting around restrictions we put on it. It’ll figure out ways of manipulating people to do what it wants.
INTERVIEWER: So what do we do? Do we just need to pull the plug on it right now? Do we need to put in far more restrictions and backstops on this? How do we solve this problem?
HINTON: It’s not clear to me that we can solve this problem. I believe we should put a big effort into thinking about ways to solve the problem. I don’t have a solution at present. I just want people to be aware that this is a really serious problem and we need to be thinking about it very hard. I don’t think we can stop the progress. I didn’t sign the petition saying we should stop working on AI, because if people in America stopped, people in China wouldn’t. It’s very hard to verify whether people are doing it.
INTERVIEWER: There have been some whistleblowers who have been warning about the dangers of AI over the past few years. One of them, Timnit Gebru, was forced out of Google for voicing his concerns. Looking back on it, do you wish that you had stood behind these whistleblowers more?
HINTON: Timnit’s actually a woman.
INTERVIEWER: Oh, sorry.
HINTON: So they were rather different concerns from mine. I think it’s easier to voice concerns if you leave the company first. And their concerns aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.
[...] INTERVIEWER: What should [AI] regulation look like?
HINTON: I’m not an expert on how to do regulation. I’m just a scientist who suddenly realized that these things are getting smarter than us. And I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us. And it’s going to be very hard. And I don’t have the solutions. I wish I did. [...]
So for some things [stopping bad actors or rogue nations is] very hard, like them using AI for manipulating electorates or for fighting wars with robot soldiers. But for the existential threat of AI taking over, we’re all in the same boat. It’s bad for all of us. And so we might be able to get China and the U.S. to agree on things like that. It’s like nuclear weapons. If there’s a nuclear war, we all lose. And it’s the same if these things take over. So since we’re all in the same boat, we should be able to get agreement between China and the U.S. on things like that.
CBS Morning, Mar 25:
[25:51] INTERVIEWER: Some people are worried that this could take off very quickly and we just might not be ready for that. Does that concern you?
HINTON: It does a bit. Until quite recently, I thought it was going to be like 20 to 50 years before we had general purpose AI. And now I think it may be 20 years or less.
INTERVIEWER: Some people think it could be like five. Is that silly?
HINTON: I wouldn’t completely rule that possibility out now. Whereas a few years ago I would have said no way.
INTERVIEWER: Okay. And then some people say AGI could be massively dangerous to humanity because we just don’t know what a system that’s so much smarter than us will do. Do you share that concern?
HINTON: I do a bit. I mean, obviously what we need to do is make this synergistic, have it so it helps people. And I think the main issue here, well, one of the main issues is the political systems we have. So I’m not confident that President Putin is going to use AI in ways that help people. [...]
INTERVIEWER: This is like the most pointed version of the question, and you can just laugh it off or not answer it if you want. But what do you think the chances are of AI just wiping out humanity? Can we put a number on that?
HINTON: It’s somewhere between 1% and 100%. I think it’s not inconceivable, that’s all I’ll say. I think if we’re sensible, we’ll try and develop it so that it doesn’t. But what worries me is the political system we’re in, where it needs everybody to be sensible.
MIT Technology Review, May 4:
INTERVIEWER: It’s been the news everywhere that you stepped down from Google this week. Could you start by telling us why you made that decision?
HINTON: Well, there were a number of reasons. There’s always a bunch of reasons for a decision like that. One was that I’m 75 and I’m not as good at doing technical work as I used to be. My memory is not as good and when I program, I forget to do things. So it was time to retire.
A second was, very recently, I’ve changed my mind a lot about the relationship between the brain and the kind of digital intelligence we’re developing. So I used to think that the computer models we were developing weren’t as good as the brain and the aim was to see if you could understand more about the brain by seeing what it takes to improve the computer models. Over the last few months, I’ve changed my mind completely and I think probably the computer models are working in a rather different way from the brain. They’re using backpropagation and I think the brain’s probably not. And a couple of things have led me to that conclusion, but one is the performance of things like GPT-4. [...]
INTERVIEWER: So talk to us about why that sort of amazement that you have with today’s large language models has completely sort of almost flipped your thinking of what back propagation or machine learning in general is.
HINTON: So if you look at these large language models, they have about a trillion connections, and things like GPT-4 know much more than we do. They have sort of common sense knowledge about everything. And so they probably know a thousand times as much as a person. But they’ve got a trillion connections, and we’ve got a hundred trillion connections. So they’re much, much better at getting a lot of knowledge into only a trillion connections than we are. And I think it’s because back propagation may be a much, much better learning algorithm than what we’ve got.
INTERVIEWER: Can you define-
HINTON: That’s scary.
INTERVIEWER: Yeah. I definitely want to get onto the scary stuff. So what do you mean by better?
HINTON: It can pack more information into only a few connections.
INTERVIEWER: Right.
HINTON: [unintelligible] trillion is only a few.
INTERVIEWER: Okay. So these digital computers are better at learning than humans, which itself is a huge claim. But then you also argue that that’s something that we should be scared of. So could you take us through that step of the argument?
HINTON: Yeah. Let me give you a separate piece of the argument, which is that if a computer’s digital, which involves very high energy costs and very careful fabrication, you can have many copies of the same model running on different hardware that do exactly the same thing. They can look at different data, but the model is exactly the same. And what that means is, suppose you have 10,000 copies, they can be looking at 10,000 different subsets of the data. And whenever one of them learns anything, all the others know it. One of them figures out how to change the weight so it can deal with this data. They all communicate with each other and they all agree to change the weights by the average of what all of them want.
And now the 10,000 things are communicating very effectively with each other so that they can see 10,000 times as much data as one agent could. And people can’t do that. If I learn a whole lot of stuff about quantum mechanics and I want you to know all that stuff about quantum mechanics, it’s a long, painful process of getting you to understand it. I can’t just copy my weights into your brain because your brain isn’t exactly the same as mine. [...]
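(To make the weight-sharing Hinton describes above concrete, here is a minimal toy sketch of my own, not anything from the interview: several identical copies of a model each compute an update on their own slice of data, and then all of them adopt the average of the proposed weights. Every function name and number below is made up purely for illustration.)

    import numpy as np

    def local_update(weights, data_batch, lr=0.1):
        # Hypothetical per-copy learning step: nudge the weights toward this copy's
        # batch mean (a stand-in for computing a real gradient on its share of data).
        return weights - lr * (weights - data_batch.mean(axis=0))

    def synchronize(copies):
        # All copies agree to change their weights to the average of what each one wants.
        averaged = np.mean(copies, axis=0)
        return [averaged.copy() for _ in copies]

    rng = np.random.default_rng(0)
    weights = rng.normal(size=8)
    copies = [weights.copy() for _ in range(4)]            # identical copies of one model
    shards = [rng.normal(size=(100, 8)) for _ in range(4)] # each copy sees different data

    for step in range(10):
        copies = [local_update(w, shard) for w, shard in zip(copies, shards)]
        copies = synchronize(copies)  # whatever one copy learned, every copy now shares

After each synchronize step the copies are identical again, which is the property Hinton contrasts with biological brains: he can’t simply copy his weights into someone else’s brain.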
INTERVIEWER: So we have digital computers that can learn more things more quickly and they can instantly teach it to each other. It’s like people in the room here could instantly transfer what they had in their heads into mine. But why is that scary?
HINTON: Well because they can learn so much more and they might. Take an example of a doctor and imagine you have one doctor who’s seen a thousand patients and another doctor who’s seen a hundred million patients. You would expect the doctor who’s seen a hundred million patients, if he’s not too forgetful, to have noticed all sorts of trends in the data that just aren’t visible if you’ve only seen a thousand patients. You may have only seen one patient with some rare disease. The other doctor who’s seen a hundred million will have seen, well you can figure out how many patients, but a lot. And so we’ll see all sorts of regularities that just aren’t apparent in small data. And that’s why things that can get through a lot of data can probably see structure in data we’ll never see.
INTERVIEWER: But then take me to the point where I should be scared of this though.
HINTON: Well if you look at GPT-4 it can already do simple reasoning. I mean reasoning is the area where we’re still better. But I was impressed the other day at GPT-4 doing a piece of common sense reasoning that I didn’t think it would be able to do.
So I asked it, I want all the rooms in my house to be white. At present there’s some white rooms, some blue rooms and some yellow rooms. And yellow paint fades to white within a year. So what should I do if I want them all to be white in two years time? And it said you should paint the blue rooms yellow. That’s not the natural solution, but it works, right?
That’s pretty impressive common sense reasoning of the kind that it’s been very hard to get AI to do using symbolic AI, because it had to understand what fades means. It had to understand the temporal stuff. And so they’re doing sort of sensible reasoning with an IQ of like 80 or 90 or something. And as a friend of mine said, “It’s as if some genetic engineers have said we’re going to improve grizzly bears. We’ve already improved them to have an IQ of 65 and they can talk English now. And they’re very useful for all sorts of things, but we think we can improve the IQ to 210.”
INTERVIEWER: I certainly have, I’m sure many people have had that feeling when you’re interacting with these latest chat bots, you know, sort of hair on the back of the neck, it’s sort of an uncanny feeling. But you know, when I have that feeling and I’m uncomfortable, I just close my laptop.
HINTON: Yes, but these things will have learned from us, by reading all the novels that ever were and everything Machiavelli ever wrote, how to manipulate people, right? And if they’re much smarter than us, they’ll be very good at manipulating us. You won’t realise what’s going on. You’ll be like a two year old who’s being asked, do you want the peas or the cauliflower? And doesn’t realise you don’t have to have either. And you’ll be that easy to manipulate. And so even if they can’t directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself. [...]
So we evolved, and because we evolved, we have certain built-in goals that we find very hard to turn off. Like we try not to damage our bodies. That’s what pain’s about. We try and get enough to eat, so we feed our bodies. We try and make as many copies of ourselves as possible. Maybe not deliberately to that intention, but we’ve been wired up so there’s pleasure involved in making many copies of ourselves. And that all came from evolution, and it’s important that we can’t turn it off. If you could turn it off, you don’t do so well. There’s a wonderful group called the Shakers, who are related to the Quakers, who made beautiful furniture but didn’t believe in sex, and there aren’t any of them around anymore.
So these digital intelligences didn’t evolve. We made them. And so they don’t have these built-in goals. And so the issue is, if we can put the goals in, maybe it’ll all be okay. But my big worry is, sooner or later, someone will wire into them the ability to create their own sub-goals.
In fact, they almost have that already. There’s a version of ChatGPT that calls ChatGPT. And if you give something the ability to create its own sub-goals in order to achieve other goals, I think it’ll very quickly realise that getting more control is a very good sub-goal because it helps you achieve other goals. And if these things get carried away with getting more control, we’re in trouble.
INTERVIEWER: So what’s the worst case scenario that you think is conceivable?
HINTON: Oh, I think it’s quite conceivable that humanity is just a passing phase in the evolution of intelligence. You couldn’t directly evolve digital intelligence. It requires too much energy and too much careful fabrication. You need biological intelligence to evolve so that it can create digital intelligence. The digital intelligence can then absorb everything people ever wrote in a fairly slow way, which is what ChatGPT has been doing. But then it can start getting direct experience of the world and learn much faster. And it may keep us around for a while to keep the power stations running. But after that, maybe not. [...]
So I think if you take the existential risk seriously, as I now do, I used to think it was way off, but I now think it’s serious and fairly close. It might be quite sensible to just stop developing these things any further. But I think it’s completely naive to think that would happen. There’s no way to make that happen.
Hinton on Twitter:
Pedro Domingos, May 3
Reminder: most AI researchers think the notion of AI ending human civilization is baloney.
Geoffrey Hinton, May 5
and for a long time, most people thought the earth was flat. If we did make something MUCH smarter than us, what is your plan for making sure it doesn’t manipulate us into giving it control?
---
Melanie Mitchell, May 3
Rather than asking AI researchers how soon machines will become “smarter than people”, perhaps we should be asking cognitive scientists, who actually know something about human intelligence?
Geoffrey Hinton, May 4
I am a cognitive scientist.
---
RyanRejoice, May 2
Hey Geoffrey. You originally predicted AI would become smarter than a human in 30-50 years. Now, you say it will happen much sooner. How soon?
Geoffrey Hinton, May 3
I now predict 5 to 20 years but without much confidence. We live in very uncertain times. It’s possible that I am totally wrong about digital intelligence overtaking us. Nobody really knows which is why we should worry now.
Latest Hinton tweet:
A good reminder to apply bounded distrust appropriately. (Tips and more by Zvi here.)
Where are people saying and hearing false claims about Hinton’s stances? Which social media platforms, if any? If people are spreading misinformation deliberately, or even strategically, then it’s important to triangulate that ASAP.
I thought I saw some in a Reddit discussion but couldn’t quickly find those comments anymore; there was also at least one of my Facebook friends.
Updated the post with excerpts from the MIT Technology Review video interview, where Hinton among other things brings up convergent instrumental goals (“And if you give something the ability to create its own sub-goals in order to achieve other goals, I think it’ll very quickly realise that getting more control is a very good sub-goal because it helps you achieve other goals. And if these things get carried away with getting more control, we’re in trouble”) and explicitly says x-risk from AI may be close (“So I think if you take the existential risk seriously, as I now do, I used to think it was way off, but I now think it’s serious and fairly close. It might be quite sensible to just stop developing these things any further. But I think it’s completely naive to think that would happen.”)
Based on this interview, it doesn’t seem like Hinton is interested in doing a lot more to reduce AI risk: https://youtu.be/rLG68k2blOc?t=3378
It sounds like he wanted to sound the alarm as best he could with his credibility and will likely continue to do interviews, but says he’ll be spending his time “watching netflix, hanging around with his kids, and trying to study his forward-forward algorithm some more”.
Maybe he was downplaying his plans because he wants to keep them quiet for now, but this was a little sad, even though having his credibility applied to discussing AI risk concerns is certainly already an amazing thing for us to have gotten.
The guy is 75 years old. Many people would have retired 10+ years ago. Any effort he’s putting in is supererogatory as far as I’m concerned. One can hope for more, of course, but let there be no hint of obligation.
Well yes, but he’s also one of the main guys who brought the field to this point, so this feels a little different. That said, I’m not saying he has an obligation, just that some people might have hoped for more after seeing him go public with this.
Editing suggestion: could you put the ellipsis here on a separate line? Putting it in that paragraph gives the impression that you or the source left something out of that quote, and whatever was cut would more likely than not have made it sound harsher and more dismissive of Gebru’s concerns than it was, which would have been carnage. So I had to go and check the source, but it turns out that’s the whole quote.
Done