First, EY is concerned about risks due to technologies that have not yet been developed; as far as I know, there is no reliable way to make predictions about the likelihood of the development of new technologies. (This is also the basis of my skepticism about cryonics.) If you’re going to say “Technology X is likely to be developed” then I’d like to see your prediction mechanism and whether it’s worked in the past.
I think there are ways to make these predictions. On the most layman level, I would point out that IBM built a machine that beats people at Jeopardy. Yes, I am aware that this is a complete machine-learning hack (this is what I could gather from the NYT coverage) and is not true cognition, but it surprised even me (I do know something about ML). I think this is useful for defeating the intuition of “machines cannot do that”. If you are truly interested, I think you can (I know you’re capable) read Norvig’s AI book, and then follow up on the parts of it that most resemble human cognition; I think serious progress is being made in those areas. BTW, Norvig does take FAI issues seriously, including a reference to an EY paper in the book.
Second, shouldn’t an organization worried about the dangers of AI be very closely in touch with AI researchers in computer science departments? Sure, there’s room for pure philosophy and mathematics, but you’d need some grounding in actual AI to understand what future AIs are likely to do.
I think they should; I have no idea whether this is being done. But if I were doing it, I would not do it publicly, as publicity could have very counterproductive consequences. So until you or I become SIAI fellows we will not know, and I cannot hold such lack of knowledge against them.
First, I’m not really claiming “machines cannot do that.” I can see advances in machine learning and I can imagine the next round of advances being pretty exciting. But I’m thinking in terms of maybe someday a machine being able to distinguish foreground from background, or understand a sentence in English, not being a superintelligence that controls Earth’s destiny. The scales are completely different. One scale is reasonable; one strains credibility, I’m afraid.
Thanks for the book recommendation; I’ll be sure to check it out.
I think controlling Earth’s destiny is only modestly harder than understanding a sentence in English—in the same sense that I think Einstein was only modestly smarter than George W. Bush. EY makes a similar point.
You sound to me like someone saying, sixty years ago: “Maybe some day a computer will be able to play a legal game of chess—but simultaneously defeating multiple grandmasters, that strains credibility, I’m afraid.” But it only took a few decades to get from point A to point B. I doubt that going from “understanding English” to “controlling the Earth” will take that long.
Well said. I shall have to try to remember that tagline.
There’s a problem with it, though. Some decades ago you’d have just as eagerly subscribed to this statement: “Controlling Earth’s destiny is only modestly harder than playing a good game of chess”, which we now know to be almost certainly false.
I agree with Rain. Understanding implies a much deeper model than playing. To make the comparison to chess, you would have to change it to something like, “Controlling Earth’s destiny is only modestly harder than making something that can learn chess, or any other board game, without that game’s mechanics (or any mapping from the computer’s output to game moves) being hard-coded, and then play it at an expert level.”
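To make the “no hard-coded mechanics” requirement concrete, here is a minimal sketch of what such a learner’s contract might look like. This is my own illustration (all names hypothetical), using tabular Q-learning only because it fits in a comment; the point is the interface, in which the agent sees only opaque states and lists of legal moves, and nothing chess-specific appears anywhere.

```python
import random
from collections import defaultdict

class GameAgnosticAgent:
    """Learns any turn-based game through a generic interface.

    The agent sees only opaque, hashable states and lists of legal
    moves; the game's mechanics are never hard-coded into it.
    """

    def __init__(self, epsilon=0.1, alpha=0.5, gamma=0.99):
        self.q = defaultdict(float)  # (state, move) -> estimated value
        self.epsilon = epsilon       # exploration rate
        self.alpha = alpha           # learning rate
        self.gamma = gamma           # discount factor

    def choose(self, state, legal_moves):
        """Pick a move: usually the best known, occasionally a random one."""
        if random.random() < self.epsilon:
            return random.choice(legal_moves)
        return max(legal_moves, key=lambda m: self.q[(state, m)])

    def update(self, state, move, reward, next_state, next_legal_moves):
        """Standard Q-learning update from one observed transition."""
        best_next = max((self.q[(next_state, m)] for m in next_legal_moves),
                        default=0.0)
        target = reward + self.gamma * best_next
        self.q[(state, move)] += self.alpha * (target - self.q[(state, move)])
```

Of course, a lookup table like this could never reach expert level at chess (the state space is astronomically large); any serious version would need function approximation and self-play. But that gap is exactly what the comparison is pointing at.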
Not obviously false, I think.
It’s the word “understanding” in the quote which presumes general intelligence and/or consciousness without directly stating it. The word “playing” does not have such a connotation, at least to me. I don’t know whether I would have thought differently back when chess was still believed to require intelligence.
Hey, remember this tagline: “I think controlling Earth’s destiny is only modestly harder than understanding a sentence in English.”