2011 has been a huge year for Artificial Intelligence. With the IBM computer Watson defeating two top Jeopardy! champions in February, it’s clear that the field is making steady progress.
Do people here generally think that this is true? I don’t see much of an intersection between Watson and AI; it seems like a few machine learning algorithms that approach Jeopardy problems in an extremely artificial way, much like chess engines approach playing chess. (Are chess engines artificial intelligence too?)
I actually do think it’s a big deal, as well as being flashy, though not an extremely big deal. Something along the lines of the best narrow AI accomplishment of any given year and the flashiest of any given 3-5 year period.
Further to my previous comment, I found the second Final Jeopardy! puzzle to be instructive. The category was “US Cities” and the clue was this:
Its largest airport was named for a World War II hero; its second largest, for a World War II battle.
A reasonably smart human will come up with an algorithm on the fly for solving this, which is to start thinking of major US cities (likely to have 2 or more airports); remember the names of their airports, and think about whether any of the names sound like a battle or a war hero. The three obvious cities to try are Los Angeles, New York, and Chicago. And “Midway” definitely sounds like the name of a battle.
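The improvised procedure described above can be sketched as a short program. Everything here is a toy: the airport table and the hero/battle sets are stand-ins for a contestant’s memory, not Watson’s actual data.

```python
# Toy knowledge base standing in for a contestant's memory.
# All entries are illustrative; this is not Watson's actual data.
airports = {
    "Los Angeles": ["LAX"],
    "New York": ["JFK", "LaGuardia"],
    "Chicago": ["O'Hare", "Midway"],
}
wwii_heroes = {"O'Hare"}    # Butch O'Hare, WWII flying ace
wwii_battles = {"Midway"}   # Battle of Midway, 1942

def solve_clue():
    # Step 1: consider only cities likely to have two or more airports.
    # Step 2: check whether the airport names include a hero and a battle.
    for city, names in airports.items():
        if len(names) < 2:
            continue
        has_hero = any(n in wwii_heroes for n in names)
        has_battle = any(n in wwii_battles for n in names)
        if has_hero and has_battle:
            return city
    return None

print(solve_clue())  # → Chicago
```

The interesting part is that a human invents both the filter and the lookup on the spot; the table itself is the easy part.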
But Watson was totally clueless. Even though it had the necessary information, it had to rely on pre-programmed algorithms to access that information. It was apparently unable to come up with a new algorithm on the fly.
Probably Watson relies heavily on statistical word associations. If the puzzle has “Charles Schulz” and “This Dog” in it, it will probably guess “Snoopy” without really parsing the puzzle. I’m just speculating here, but my impression is that AI has a long way to go.
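For what it’s worth, the kind of association-based guessing being speculated about here might look something like this sketch, with entirely made-up co-occurrence counts:

```python
from collections import Counter

# Made-up co-occurrence counts between candidate answers and clue
# words; purely illustrative, not derived from any real corpus.
associations = {
    "Snoopy":   {"Schulz": 90, "dog": 80, "beagle": 70},
    "Lassie":   {"dog": 60, "collie": 50},
    "Garfield": {"cat": 75, "Davis": 40},
}

def guess(clue_words):
    """Pick the answer whose association scores with the clue words
    sum highest; note there is no parsing of the clue's structure."""
    scores = Counter()
    for answer, table in associations.items():
        for word in clue_words:
            scores[answer] += table.get(word, 0)
    return scores.most_common(1)[0][0]

print(guess(["Schulz", "dog"]))  # → Snoopy
```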
A reasonably smart human will come up with an algorithm on the fly for solving this, which is to start thinking of major US cities (likely to have 2 or more airports); remember the names of their airports, and think about whether any of the names sound like a battle or a war hero. The three obvious cities to try are Los Angeles, New York, and Chicago. And “Midway” definitely sounds like the name of a battle.
But Watson was totally clueless. Even though it had the necessary information, it had to rely on pre-programmed algorithms to access that information. It was apparently unable to come up with a new algorithm on the fly.
This isn’t meaningful. Whatever method we use to “come up with algorithms on the fly” is itself an algorithm, just a more complicated one.
Probably Watson relies heavily on statistical word associations. If the puzzle has “Charles Schulz” and “This Dog” in it, it will probably guess “Snoopy” without really parsing the puzzle.
This isn’t true. You know, a lot of the things you’re talking about here regarding Watson aren’t secret...
This isn’t meaningful. Whatever method we use to “come up with algorithms on the fly” is itself an algorithm, just a more complicated one.
Then why wasn’t Watson simply programmed with one meta-algorithm rather than hundreds of specialized algorithms?
This isn’t true. You know, a lot of the things you’re talking about here regarding Watson aren’t secret.
FWIW, the wiki article indicates that Watson would “parse the clues into different keywords and sentence fragments in order to find statistically related phrases.” Would you mind giving me some links which show that Watson doesn’t rely heavily on statistical word associations?
Then why wasn’t Watson simply programmed with one meta-algorithm rather than hundreds of specialized algorithms?
I don’t have a clue what you’re talking about. Where are you getting this claim that it was programmed with “hundreds of specialized algorithms”? And how is that really qualitatively different from what we do?
Would you mind giving me some links which show that Watson doesn’t rely heavily on statistical word associations?
I never said it didn’t. I was contradicting your statement that it relied on that without any parsing.
I don’t have a clue what you’re talking about. Where are you getting this claim that it was programmed with “hundreds of specialized algorithms”?
For one thing, the Wikipedia article talks about thousands of algorithms. My common sense tells me that many of those algorithms are specialized for particular types of puzzles. Anyway, why didn’t Watson’s creators program Watson with a meta-algorithm to enable it to solve puzzles like the airport puzzle?
And how is that really qualitatively different from what we do?
For one thing, smart people can come up with new algorithms on the fly; for example, an organized way of solving the airport puzzle. If that were just a matter of making a more complicated computer program, then why didn’t Watson’s creators do it?
I was contradicting your statement that it relied on that without any parsing.
My statement was speculation. So if you are confident that it is wrong, then presumably you must have solid evidence to believe so. If you don’t know one way or another, then we are both in the same boat.
For one thing, smart people can come up with new algorithms on the fly; for example, an organized way of solving the airport puzzle. If that were just a matter of making a more complicated computer program, then why didn’t Watson’s creators do it?
That’s like asking why a human contestant failed to come up with a new algorithm on the fly. Or, put simply: no one is perfect. Not the other players, not Watson, and not Watson’s creators. While you’ve certainly identified a flaw, I’m not sure it’s really quite as big a deal as you make it out to be. I mean, Watson did beat actual humans, so clearly they managed something fairly robust.
I don’t think Watson is anywhere near an AGI, but the field of AI development seems to mostly include “applied-AI” like Deep Blue and Watson, and failures, so I’m going to go ahead and root for the successes in applied-AI :)
That’s like asking why a human contestant failed to come up with a new algorithm on the fly.
I disagree. A human contestant who failed to come up with a new algorithm was perhaps not smart enough, but is still able to engage in the same kind of flexible thinking under less challenging circumstances. I suspect Watson cannot do so under any circumstances.
I mean, Watson did beat actual humans, so clearly they managed something fairly robust.
Without its super-human buzzer speed, I doubt Watson would have won.
I believe that the way things were designed, Ken Jennings was probably at least as good as Watson on buzzer speed. Watson presses the buzzer with a mechanical mechanism, to give it a latency similar to a finger; and Watson doesn’t start going for the buzzer until it sees the ‘buzzer unlocked’ signal. By contrast, Ken Jennings has said that he starts pressing the buzzer before the signal, relying on his intuitive sense of the typical delay between the completion of a question and the buzzer-unlock signal.
Watson does have a big advantage in this regard, since it can knock out a microsecond-precise buzz every single time with little or no variation. Human reflexes can’t compete with computer circuits in this regard. But I wouldn’t call this unfair … precise timing just happens to be one thing computers are better at than we humans. It’s not like I think Watson should try buzzing in more erratically just to give homo sapiens a chance.
Here’s what Wikipedia says:
The Jeopardy! staff used different means to notify Watson and the human players when to buzz, which was critical in many rounds. The humans were notified by a light, which took them tenths of a second to perceive. Watson was notified by an electronic signal and could activate the buzzer within about eight milliseconds. The humans tried to compensate for the perception delay by anticipating the light, but the variation in the anticipation time was generally too great to fall within Watson’s response time. Watson did not operate to anticipate the notification signal.
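Taking the quoted figures at face value, a toy simulation shows why anticipating the light is a losing strategy against a fixed 8 ms response. Only the 8 ms comes from the quote; the human jitter value is my own guess, and the model simplifies the real lockout rule by treating any early press as a lost race.

```python
import random

random.seed(0)  # deterministic toy run

WATSON_DELAY = 0.008   # seconds; the ~8 ms figure quoted above
JITTER = 0.050         # seconds; my guess at human anticipation spread
TRIALS = 10_000

wins = 0
for _ in range(TRIALS):
    # The human aims for the unlock instant but lands within a spread.
    human = random.gauss(0.0, JITTER)
    # Simplification: an early press (negative time) loses the race,
    # and a press after Watson's 8 ms response also loses.
    if 0.0 <= human < WATSON_DELAY:
        wins += 1

print(f"human wins {100 * wins / TRIALS:.1f}% of buzzer races")
```

Under these assumptions the human wins only the small fraction of races where the anticipated press happens to land inside the 8 ms window.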
For one thing, the Wiki article talks about thousands of algorithms. My common sense tells me that many of those algorithms are specialized for particular types of puzzles. Anyway, why didn’t Watsons creators program Watson with a meta-algorithm to enable it to solve puzzles like the Airport puzzle?
Er… they did? The whole thing ultimately had to produce one answer, after all. It just wasn’t good enough.
The whole thing ultimately had to produce one answer, after all. It just wasn’t good enough.
Ok, then arguably it’s not so simple to create an algorithm which is “just more complicated.” I mean, one could say that an ICBM is just like a Qassam rocket, but just more complicated.
An ICBM is “just” a bow-and-arrow system with a more precise guidance system, more energy available to spend reaching its destination, and a more destructive payload.
Right, and it’s far more difficult to construct. It probably took thousands of years between the first missile weapons and modern ICBMs. I doubt that it will take thousands of years to create general AI, but it’s still the same concept.
The first general AI will probably be “just” an algorithm running on a digital computer.
This comment doesn’t appear to have any relevance. Where did anyone suggest that the way to make it better is to just make it more complicated? Where did anyone suggest that improving it would be simple? I am completely baffled.
Earlier, we had this exchange:

Me:
But Watson was totally clueless. Even though it had the necessary information, it had to rely on pre-programmed algorithms to access that information. It was apparently unable to come up with a new algorithm on the fly.
You:
Whatever method we use to “come up with algorithms on the fly” is itself an algorithm, just a more complicated one.
So you seemed to be saying that there’s no big deal about the human ability to come up with a new algorithm—it’s just another algorithm. Which is technically true, but this sort of meta-algorithm obviously would require a lot more sophistication to create.
Well, yes. Though first I should note that I am skeptical that what you are talking about—the process of answering a Final Jeopardy question—could actually be described as coming up with new algorithms on the fly in the first place. Regardless, if we do accept that, my point that there is no meaningful distinction between relying on pre-programmed algorithms, and (algorithmically) coming up with new ones on the fly, stands. There are plenty of ways in which our brains are more sophisticated than Watson, but that one isn’t a meaningful distinction. Perhaps you mean something else.
my point that there is no meaningful distinction between relying on pre-programmed algorithms, and (algorithmically) coming up with new ones on the fly,
Then again my question: Why not program such a meta-algorithm into Watson?
I still don’t think you’re saying what you mean. The question doesn’t make any sense. The answer to the question you probably intended to ask is, “Because the people writing Watson didn’t know how to do so in a way that would solve the problem, and presumably nobody currently does”. I mean, I think I get your point, but...
Because the people writing Watson didn’t know how to do so in a way that would solve the problem, and presumably nobody currently does
Fine, so it’s a bit like the state of rocket science in 1900. They had crude military rockets but did not know how to make the kind of really destructive stuff that would come 100 years later. As I said, AI still has a way to go.
Oh, yeah, of course. :)

I found Watson to be pretty disappointing.
For one thing, its big advantage was inhuman button-pushing speed, since an actuator is much faster than a human finger. Now, one might argue that pushing the button is part of the game, but to that I would respond that reading the puzzles off of the right screen is part of the game too, and Watson didn’t have to do that—the puzzles were inputted in the form of a text file. Also, travelling to Los Angeles is part of the game and Watson didn’t have to do that either. If the game had been played in Los Angeles instead of New York, then all of Watson’s responses would have been delayed by a few hundredths of a second.
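The cross-country delay claim checks out on a napkin. The distance and fiber speed below are rough assumptions, not measured values:

```python
# Rough figures, assumed for the estimate (not measured values).
distance_km = 4000          # New York area to Los Angeles, roughly
fiber_speed_km_s = 200_000  # light in optical fiber, about 2/3 of c

one_way_s = distance_km / fiber_speed_km_s
round_trip_s = 2 * one_way_s
print(f"round trip: {round_trip_s * 1000:.0f} ms")  # prints "round trip: 40 ms"
```

So roughly 40 ms round trip, i.e. a few hundredths of a second, which is larger than Watson’s 8 ms buzzer response.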
Another problem is that a lot of the puzzles on Jeopardy don’t actually require much intelligence to solve, particularly if you can write a specialized program for each puzzle category. For example, I would guess a competent computer science grad student could pretty easily write a program that did reasonably well in “state capitals.” And of the puzzles which do require some intelligence, the two human champions will split the points.
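For illustration, the hypothetical “state capitals” program really could be this simple:

```python
# Partial lookup table; a real version would list all fifty states.
capitals = {
    "California": "Sacramento",
    "New York": "Albany",
    "Illinois": "Springfield",
    "Texas": "Austin",
}

def answer(state):
    # Jeopardy! responses are phrased as questions.
    return f"What is {capitals[state]}?"

print(answer("Illinois"))  # → What is Springfield?
```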
I’m not saying that Watson wasn’t impressive, just that its win was not convincing.
Watson was not specialized for different categories. It would learn categories—during a game, after seeing question-answer pairs from it. It ignored category titles, because they couldn’t find any way to get that to work. (Hence “Toronto” when the category was “U.S. cities”.)
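For what it’s worth, “learning a category from question-answer pairs” can be illustrated with a toy sketch. This is my own illustration of the idea, not IBM’s actual mechanism:

```python
# Toy illustration of learning what a category wants from the
# question-answer pairs revealed so far; my own sketch of the idea,
# not IBM's actual mechanism.
def answer_type(answer):
    return "year" if answer.isdigit() else "name"

def expected_type(seen_pairs):
    """Vote on the answer type this category seems to expect."""
    types = [answer_type(a) for _, a in seen_pairs]
    return max(set(types), key=types.count)

seen = [
    ("This year the Berlin Wall fell", "1989"),
    ("This year the war in Europe ended", "1945"),
]
print(expected_type(seen))  # → year
```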
Watson was not specialized for different categories. It would learn categories—during a game, after seeing question-answer pairs from it. It ignored category titles,
I have a really hard time believing this. A lot of the categories on Jeopardy recur regularly and pose the same types of puzzles again and again. IBM would have been crazy not to take advantage of this regularity. Or at least to pay attention to the category titles in evaluating possible answers.
*shrug* I mean, if you want to claim that the people at IBM coordinated to lie about this point, go ahead, but don’t expect me to bother discussing this with you at that point.
If your comment was inaccurate, it would probably be because you were mistaken and perhaps something you read was mistaken, not that IBM had coordinated to lie.
Yeah, so as it happens, I was misremembering—it doesn’t ignore category titles, it just doesn’t weight them very highly. Which FWIW still contradicts what brazil84 was suggesting it does. :P
Here’s a quote I found from the IBM research blog:

Seems to me that at a minimum, this shows that Watson does not ignore category titles.
I didn’t say it ignores categories—it knows which questions go together in a category, and learns what to use for a given category as it sees question-answer pairs for it. What I said was that it ignores category titles.
However as it happened I was wrong about this; slight misremembering, sorry. Watson does note category titles, it just doesn’t weight them very highly. Apparently it learned this automatically during its training games. Source: http://www-03.ibm.com/innovation/us/watson/related-content/toronto.html