Geoff Hinton Quits Google
The NYTimes reports that Geoff Hinton has quit his role at Google:
On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.
Some clarification from Hinton followed:
It was already apparent that Hinton considered AI potentially dangerous, but this seems significant.
Hinton is one of the few people who, unfortunately, definitely does not get to say “if I hadn’t done it, someone else would have”.
But this is based as hell. Hard alignmentpilled Hinton before Hinton-level AI?
Does anyone know of any AI-related predictions by Hinton?
Here’s the only one I know of—“People should stop training radiologists now. It’s just completely obvious within five years deep learning is going to do better than radiologists because it can get a lot more experience. And it might be ten years but we got plenty of radiologists already.” − 2016, slightly paraphrased
This still seems like a testable prediction: by November 2026, radiologists should be completely replaceable by deep learning methods, at least apart from regulatory requirements for trained physicians.
FYI, radiology is actually not mostly looking at pictures; much of it is image-guided intervention (for example, embolisation), which is significantly harder to automate.
Same for family doctors: it’s not just following guidelines and renewing prescriptions; a good part of the job is physical examination.
I agree that AI can do a lot of what happens in medicine though.
This is indeed an interesting losing* bet. He was mostly right on the technical side (yes, deep learning now does better than the average radiologist on many tasks), but completely wrong on the societal impact (no, we still need to train radiologists). The same thing happened with ophthalmologists when deep learning significantly shortened part of their workflow: they just spent the saved time doing more.
*16+5=21, not 26 😉
“it might be ten”
Yeah, he said that too. But let’s face it: it’s 2023 and there’s absolutely no sign of the pressure on radiologists starting to ease. Especially in Canada, where the baby-boomer retirement wave is hitting hard and the newer generations value family time more than dying at (or from) work.
But yeah, I concede it’s not settled yet. Do you want to bet friendly goodies with me?
In my local news today:
« Radiologist at the CHUM, An Tang […] chaired an artificial intelligence task force of the Canadian Association of Radiologists. […] First observation: his profession would not be threatened.
“The combination of the doctor and the AI algorithm is going to be superior to the AI alone or the doctor alone. The mistakes they are likely to make are not of the same [type].” »
https://ici.radio-canada.ca/nouvelle/1975944/lintelligence-humaine-artificielle-hopital-revolution
No; I agree with you.
Another interview with Hinton about this: https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/
Chosen excerpts:
https://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom
Hinton seems to be more responsible now!
Archive.org link: https://web.archive.org/web/20230501211505/https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html
Note, Cade Metz is the author of the somewhat infamous NYT article about Scott Alexander.
I think people in the LW/alignment community should really reach out to Hinton to coordinate messaging now that he’s suddenly become the most high-profile and credible public voice on AI risk. Not sure who should be doing this specifically, but I hope someone’s on it.
I note that Eliezer did this (pretty much immediately) on Twitter.
Not sure if Hinton took him up on that (or even saw the tweet reply). I’m just hoping someone is more proactively reaching out to him to coordinate, is all. He commands a lot of respect in this industry, as I’m sure most know.