Why do you care that Geoffrey Hinton worries about AI x-risk?
Why do so many people in this community care that Hinton is worried about x-risk from AI?
Do people mention Hinton because they think it’s persuasive to the public?
Or persuasive to the elites?
Or do they think that Hinton being worried about AI x-risk is strong evidence for AI x-risk?
If so, why?
Is it because he is so intelligent?
Or because you think he has private information or intuitions?
Do you think he has good arguments in favour of AI x-risk?
Do you think he has a good understanding of the problem?
Do you update more-so on Hinton’s views than on Yann LeCun’s?
I’m inspired to write this because Hinton and Hopfield were just announced as the winners of the Nobel Prize in Physics. But I’ve been confused about these questions ever since Hinton went public with his worries. These questions are sincere (i.e. non-rhetorical), and I’d appreciate help on any/all of them. The phenomenon I’m confused about includes the other “Godfathers of AI” here as well, though Hinton is the main example.
Personally, I’ve updated very little on either LeCun’s or Hinton’s views, and I’ve never mentioned either person in any object-level discussion about whether AI poses an x-risk. My current best guess is that people care about Hinton only because it helps with public/elite outreach. This explains why activists tend to care more about Geoffrey Hinton than researchers do.
I think it’s mostly about elite outreach. If you already have a sophisticated model of the situation, you shouldn’t update much on it, but it’s a reasonably clear signal (for outsiders) that x-risk from AI is a credible concern.
I think it’s more “Hinton’s concerns are evidence that worrying about AI x-risk isn’t silly” than “Hinton’s concerns are evidence that worrying about AI x-risk is correct”. The most common negative response to AI x-risk concerns is (I think) dismissal, and it seems relevant to that to be able to point to someone who (1) clearly has some deep technical knowledge, (2) doesn’t seem to be otherwise insane, (3) has no obvious personal stake in making people worry about x-risk, and (4) is very smart, and who thinks AI x-risk is a serious problem.
It’s hard to square “ha ha ha, look at those stupid nerds who think AI is magic and expect it to turn into a god” or “ha ha ha, look at those slimy techbros talking up their field to inflate the value of their investments” or “ha ha ha, look at those idiots who don’t know that so-called AI systems are just stochastic parrots that obviously will never be able to think” with the fact that one of the people you’re laughing at is Geoffrey Hinton.
(I suppose he probably has a pile of Google shares so maybe you could squeeze him into the “techbro talking up his investments” box, but that seems unconvincing to me.)
I think it pretty much only matters as a trivial refutation of (not-object-level) claims that no “serious” people in the field take AI x-risk concerns seriously, and has no bearing on object-level arguments. My guess is that Hinton is somewhat less confused than Yann but I don’t think he’s talked about his models in very much depth; I’m mostly just going off the high-level arguments I’ve seen him make (which round off to “if we make something much smarter than us that we don’t know how to control, that might go badly for us”).
He has also argued that digital intelligence is superior to analog human intelligence because many identical copies of a digital model can be trained in parallel on different data and then exchange their weight updates, and that biological brains are at a disadvantage because they probably use a learning algorithm less efficient than backpropagation.
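To make the “many identical copies exchanging weight updates” idea concrete, here is a minimal sketch of data-parallel training with weight averaging. This is my own illustration, not code from Hinton; the linear model, loss, and data shards are hypothetical placeholders.

```python
# Minimal sketch of the "many copies" idea: several identical digital models
# train in parallel on different data shards, then pool what they learned by
# averaging their weights. Model, loss, and data are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

def train_replica(weights, data, targets, lr=0.1, steps=100):
    """Train one copy of a linear model on its own data shard via gradient descent."""
    w = weights.copy()
    for _ in range(steps):
        preds = data @ w
        grad = data.T @ (preds - targets) / len(data)  # gradient of mean squared error
        w -= lr * grad
    return w

# One shared starting point, several different data shards.
init = np.zeros(5)
shards = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(4)]

# Each identical copy learns from its own data in parallel...
replica_weights = [train_replica(init, X, y) for X, y in shards]

# ...then the copies "exchange what they learned" by averaging their weights,
# so every copy ends up carrying information from data it never saw directly.
merged = np.mean(replica_weights, axis=0)
print(merged)
```

The toy example is only meant to show the mechanism being pointed at: after the averaging step, every copy carries information from data it never saw, which is the kind of high-bandwidth knowledge sharing that analog brains lack.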
Yes, outreach. Hinton has now won both the Turing Award and the Nobel Prize in Physics. Basically, he has gained maximum reputation; nobody can convincingly doubt his respectability. If you meet anyone who dismisses warnings about extinction risk from superhuman AI as low status and outside the Overton window, they can be countered by referring to Hinton. He is the ultimate appeal-to-authority. (This is not a very rational argument, but dismissing an idea on the basis of status and Overton windows is even less so.)
I think it’s mostly because he’s well known and has (especially after the Nobel Prize) credentials recognized by the public and elites. Hinton legitimizes the AI safety movement, maybe more than anyone else.
If you watch his Q&A at METR, he says something along the lines of “I want to retire and don’t plan on doing AI safety research. I do outreach and media appearances because I think it’s the best way I can help (and because I like seeing myself on TV).”
And he’s continuing to do that. The only real topic he discussed in his first phone interview after receiving the prize was AI risk.
Hmm. He seems pretty peripheral to the AI safety movement, especially compared with (e.g.) Yoshua Bengio.
Yeah that’s true. I meant this more as “Hinton is proof that AI safety is a real field and very serious people are concerned about AI x-risk.”
Bengio and Hinton are the two most influential “old guard” AI researchers turned safety advocates as far as I can tell, with Bengio being more active in research. Your e.g. is super misleading, since my list would have been something like:
Bengio
Hinton
Russell
I think it is just the cumulative effect of people seeing yet another prominent AI scientist “admit” that no one has any clear solution to the possible problem of a runaway ASI. Given that the median p(doom) among AI scientists is about 5-10%, people are of course wondering wtf is going on: why pursue a technology with such high risk for humanity if they really think it is that dangerous?
From my perspective, I would say it’s 7 and 9.
For 7: One controversy about AI risk is that we do not yet know of or see an existing model that poses that risk. But frontier companies such as Google may be developing models privately, and Hinton may have seen more there.
For 9: Expert opinions are important and generally add credibility, as the question of how and why AI risks can emerge is at root highly technical. It is important to understand the fundamentals of the learning algorithms, and experts may also have seen more algorithms. This matters to me as I already work in this space.
Lastly, for 10: I do agree it is important to listen to multiple sides, as experts sometimes disagree among themselves. It can be interesting to analyze a speaker’s background to understand their perspective. Hinton seems to have more background in cognitive science compared with LeCun, who seems to me to be more strictly computer science (but I could be wrong). I am not sure, but my guess is that this may affect how they view problems. (I am only saying it could result in different views, not commenting on which one is better or worse, so this is relatively unhelpful for deciding whose view to align with more.)