AI seems to be pretty good at board games relative to us. Does this tell us anything interesting? For instance, about the difficulty of automating other kinds of tasks? How about the task of AI research? Some thoughts here.
For anything whose function and sequencing we thoroughly understand, the programming is straightforward and easy, at least in the conceptual sense. That covers most games, including video games. The computer’s “side” in a video game, for example, which looks conceptually difficult, most of the time turns out logically to be only decision trees.
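To make the “only decision trees” point concrete, here is a toy sketch of the kind of logic that drives a typical video-game opponent. Every name in it is invented for illustration; real games just have many more branches of the same kind:

```python
from dataclasses import dataclass

# Toy sketch of a video-game opponent as a hand-written decision tree.
# Every branch is a designer-authored rule; no search or learning is involved.

@dataclass
class Enemy:
    health: int
    can_see_player: bool
    in_weapon_range: bool
    heard_noise: bool

def choose_action(e: Enemy) -> str:
    if e.health < 25:
        return "flee"                                   # survival first
    if e.can_see_player:
        return "attack" if e.in_weapon_range else "chase"
    return "investigate" if e.heard_noise else "patrol"

print(choose_action(Enemy(80, True, False, False)))     # -> "chase"
```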
The challenge is the tasks we can’t precisely define, like general intelligence. The rewarding approach here is to break down processes into identifiable subtasks. A case in point is understanding natural languages, one of whose essential questions is: what is the meaning of “meaning”? In terms of a machine it can only be the content of a subroutine or pointers to subroutines. The input problem, converting sentences into sets of executable concepts, is thus approachable. The output problem, however, converting unpredictable concepts into words, is much tougher. It may involve growing decision trees on the fly.
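As a crude illustration of the “meaning as subroutine pointers” idea, here is a minimal sketch; the vocabulary and handlers are invented for the example, and real natural-language understanding is vastly harder than this:

```python
# Minimal sketch: the "meaning" of a phrase is literally a pointer to a
# subroutine, and interpreting a sentence means dispatching to handlers.

def turn_on(device):  print(f"switching {device} on")
def turn_off(device): print(f"switching {device} off")

LEXICON = {"turn on": turn_on, "turn off": turn_off}   # phrase -> subroutine

def interpret(sentence):
    for phrase, subroutine in LEXICON.items():
        if sentence.startswith(phrase):
            return subroutine(sentence[len(phrase):].strip())
    raise ValueError("no executable concept found for: " + sentence)

interpret("turn on the lamp")   # -> switching the lamp on
```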
Thanks for the nice summary and the questions. I think it is worth noting that AI is good only at some board games (fully observable, deterministic games) and not at others (partially observable, non-deterministic games such as, say, Civilization).
Do you know of a partially observable game for which AI lags behind humans substantially? These examples are of particular interest to me because they would significantly revise my understanding of what problems are hard and easy.
The most prominent partial-information games that I know of are Bridge and Poker, and AIs can now win at both of these (both in fact proved to be much easier than the classic deterministic games). Backgammon is random, and also turned out to be relatively easy—in fact the randomness itself is widely considered to have made the game easy for computers! Scrabble is the other example that comes to mind, and the situation there is the same.
For Civilization in particular, it seems very likely that AI would be wildly superhuman if it were subject to the same kind of attention as other games, simply because the techniques used in Go and Backgammon, together with a bunch of ad hoc logic for navigating the tech tree, should be able to get so much traction.
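For a sense of what “the techniques used in Go” looked like at the time, here is a compressed sketch of UCT-style Monte Carlo tree search. The `Game` interface and all names are my own illustration, not any specific bot, and the statistics are kept from a single player’s perspective to keep it short (real implementations alternate perspective with the side to move). The point is how little game-specific knowledge the core loop needs; for Civilization the hard part would be the state representation plus the ad hoc tech-tree logic layered on top:

```python
import math, random

class Game:
    """Assumed interface for a deterministic, two-player, zero-sum game."""
    def moves(self, state): ...        # legal moves in `state`
    def play(self, state, move): ...   # successor state
    def winner(self, state): ...       # +1, -1, or 0 when terminal; None otherwise

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

def uct_child(node, c=1.4):
    # UCB1: trade off average result (exploitation) against uncertainty (exploration).
    return max(node.children, key=lambda ch:
               ch.wins / ch.visits + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(game, root_state, player, iterations=10_000):
    root = Node(root_state)
    for _ in range(iterations):
        # 1. Selection: descend via UCB1, visiting untried children first.
        node = root
        while node.children:
            fresh = [ch for ch in node.children if ch.visits == 0]
            node = random.choice(fresh) if fresh else uct_child(node)
        # 2. Expansion: grow the tree one layer below a visited, non-terminal leaf.
        if node.visits > 0 and game.winner(node.state) is None:
            node.children = [Node(game.play(node.state, m), node, m)
                             for m in game.moves(node.state)]
            node = random.choice(node.children)
        # 3. Simulation: play uniformly random moves to the end of the game.
        state = node.state
        while game.winner(state) is None:
            state = game.play(state, random.choice(game.moves(state)))
        # 4. Backpropagation: credit every node on the path with the outcome.
        result = game.winner(state)
        while node is not None:
            node.visits += 1
            node.wins += (result == player)   # simplified single-perspective scoring
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move  # most robust move
```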
Agreed. It’s not Civilization, but Starcraft is also partially observable and non-deterministic, and a team of students managed to bring their Starcraft AI to the level of being able to defeat a “top 16 in Europe”-level human player after only a “few months” of work.
The game AIs for popular strategy games are often bad because the developers don’t actually have the time and resources to make a really good one, and it’s not a high priority anyway—most people playing games like Civilization want an AI that they’ll have fun defeating, not an AI that actually plays optimally.
Have you played this type of game?

I think that if you played on a big map (freeciv supports really huge ones), then your goals (as in the real world) might be better fulfilled by playing WITH (not against) the AI. For example, manually managing 5 thousand engineers could take several hours per turn.
You could also explore more concepts in this game (for example geometric growth, a metastasis-like method of spreading your civilisation, and of course cooperation with some type of AI)…
I think it would be easy to create a Civilization AI that chooses a growth path with a certain victory type in mind. So if the AI picks a military win, it will focus on building troops, acquiring territory, and maintaining states of war with other players. What might be hard are other win conditions, like diplomatic or cultural victories, because those require much more intuitive and nuanced decision-making without a totally clear course of action.
most people playing games like Civilization want an AI that they’ll have fun defeating, not an AI that actually plays optimally.
The popular AI mods for Civ actually tend to make the AIs less thematic—they’re less likely to be nice to you just because of a thousand years of harmonious and profitable peace, for example, and more likely to build unattractive but efficient Stacks of Doom. Of course there are selection effects on who installs such mods.
That’s a very valid point: a successful AI in a game is the one which puts up a decent fight before losing.

In RTS games an AI has a large built-in advantage over humans because it can micromanage so much better.

most people playing games like Civilization want an AI that they’ll have fun defeating, not an AI that actually plays optimally.
I think you’re mostly correct on this. Sometimes difficult opponents are needed, but for almost all games that can be trivially achieved by making the AI cheat rather than improving the algorithms. That said, when playing a game vs an AI you do want the AI to at least appear to be intelligent; although humans can often be quite easy to fool with cheating, a good algorithm is still a better way of giving this appearance than a fake. It doesn’t have to be optimal, and even if it is you can constrain it enough to make it beatable, or intentionally design different kinds of weaknesses into the AI so that humans can have fun looking for those weaknesses and feel good when they find them. Ultimately, though, the point is that the standard approach of having lots and lots of scripting still tends to get the job done, and developers almost never find the resource expenditure for good AI to be worthwhile.
However, I think that genuinely superhuman AI in games like Starcraft and Civilization is far harder than you imply. For example, in RTS games (as Lumifer has said) the AI has a built-in advantage due to its capacity for micromanagement. Moreover, although the example you cite has an AI from a “few months” of work beating a high-level human player, I think that was quite likely to be a one-off occurrence. Beating a human once is quite different from consistently beating a human.
If you look at the results of the AIIDE Man vs Machine matches, the top bots consistently lose every game to Bakuryu (the human representative). According to this report,
In this match it was shown that the true weakness of state of the art StarCraft AI systems was that humans are very adept at recognizing scripted behaviors and exploiting them to the fullest. A human player in Skynet’s position in the first game would have realized he was being taken advantage of and adapted his strategy accordingly, however the inability to put the local context (Bakuryu kiting his units around his base) into the larger context of the game (that this would delay Skynet until reinforcements arrived) and then the lack of strategy change to fix the situation led to an easy victory for the human. These problems remain as some of the main challenges in RTS AI today: to both recognize the strategy and intent of an opponent’s actions, and how to effectively adapt your own strategy to overcome them.
It seems to me that the best AIs in these kinds of games work by picking from a relatively narrow set of overall strategies, and then executing those strategies as flawlessly as possible. In something like Starcraft the AI’s potential for this kind of execution is definitely superhuman, but as the Man vs Machine matches demonstrate, this really isn’t enough.
In the case of the Civilization games, the fact that they aren’t real-time removes quite a lot of the advantage that an AI gets in terms of micromanagement. Also, like in Starcraft, classical AI techniques really don’t work particularly well due to the massive branching factor.
Granted, taking a similar approach to the Starcraft bots might still work pretty well; I believe there are some degenerate strategies in many of the Civ games that are quite strong on their own, and if you program an AI to execute them with a high degree of precision and good micromanagement, and add some decent reactive play, that might be good enough.
However, unless the game is simply broken due to bad design, I suspect that you would find that, like the Starcraft bots, AIs designed on that kind of idea would still be easily exploited and consistently beaten by the best human players.
I wouldn’t say that poker is “much easier than the classic deterministic games”, and poker AI still lags significantly behind humans in several regards. Basically, the strongest poker bots at the moment are designed around solving for Nash equilibrium strategies (of an abstracted version of the game) in advance, but this fails in a couple of ways:
- These approaches haven’t really been extended past 2- or 3-player games.
- Playing a NE strategy makes sense if your opponent is doing the same, but your opponent almost always won’t be. Thus, in order to play better, poker bots should be able to exploit weak opponents.

Both of these are rather nontrivial problems.
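To make “solving for Nash equilibrium strategies in advance” less abstract, here is a minimal sketch of regret matching, the building block underlying the counterfactual-regret-minimization methods such bots use, applied to rock-paper-scissors rather than an abstracted poker game (all names and numbers are my own illustration, not any actual bot):

```python
import random

ACTIONS = range(3)  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    # +1 if action a beats action b, -1 if it loses, 0 on a tie.
    return [0, 1, -1][(a - b) % 3]

def strategy(regrets):
    # Mix over actions in proportion to positive cumulative regret.
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total else [1 / 3] * 3

def train(iterations=100_000):
    regrets = [[0.0] * 3, [0.0] * 3]   # cumulative regrets per player
    sums = [[0.0] * 3, [0.0] * 3]      # strategy sums -> average strategy
    for _ in range(iterations):
        strats = [strategy(r) for r in regrets]
        moves = [random.choices(ACTIONS, weights=s)[0] for s in strats]
        for p in (0, 1):
            me, opp = moves[p], moves[1 - p]
            for a in ACTIONS:
                # Regret: how much better fixed action a would have done.
                regrets[p][a] += payoff(a, opp) - payoff(me, opp)
                sums[p][a] += strats[p][a]
    # The *average* strategy converges to the equilibrium mix (1/3, 1/3, 1/3).
    return [[round(s / iterations, 3) for s in row] for row in sums]

print(train())
```

The poker versions differ mainly in scale, tracking regrets per information set of an abstracted game, which is exactly where the two weaknesses above (few players, no opponent exploitation) come from.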
Kriegspiel, a partially observable version of chess, is another example where the best humans are still better than the best AIs, although I’ll grant that the gap isn’t a particularly big one, and likely mostly has to do with it not being a significant research focus.
Interestingly enough, a team at MIT managed to make an AI that learned how to play from the manual and proceeded to win 80% of its games against the built-in AI, though I don’t know which difficulty it was set to, or how the freeciv AI compares to the one in normal Civilization.
I was disappointed to see my new favorite “pure” game Arimaa missing from Bostrom’s list. Arimaa was designed to be intuitive for humans but difficult for computers, making it a good test case. Indeed, I find it to be very fun, and computers do not seem to be able to play it very well. In particular, computers are nowhere close to beating top humans despite the fact that there has arguably been even more effort to make good computer players than good human players.
Arimaa’s branching factor dwarfs that of Go (which in turn beats every other commonly known example). Since a super-high branching factor is also a characteristic feature of general AI test problems, I think it remains plausible that simple, precisely defined games like Arimaa are good test cases for AI, as long as the branching factor keeps the game out of reach of brute force search.
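A rough quantitative illustration of why the branching factor matters: game-tree size grows exponentially in depth, so brute-force search dies on Arimaa long before it does on Go. The branching factors below are commonly cited approximations (my numbers, not the commenter’s):

```python
# Approximate nodes in a depth-10 game tree for commonly cited branching
# factors: chess ~35 legal moves per position, Go ~250, Arimaa ~17,000.
for game, b in [("chess", 35), ("Go", 250), ("Arimaa", 17_000)]:
    print(f"{game:7s} b={b:6d}  b**10 = {b**10:.2e}")
```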
In particular, computers are nowhere close to beating top humans despite the fact that there has arguably been even more effort to make good computer players than good human players.

Reportedly this just happened recently: http://games.slashdot.org/story/15/04/19/2332209/computer-beats-humans-at-arimaa

Go is super close to being beaten, and AIs do very well against all but the best humans.
This summary of already-superhuman game-playing AIs impressed me for two weeks, but only until yesterday. John McCarthy is quoted in Vardi (2012) as saying: “As soon as it works, no one calls it AI anymore.” (p. 13)

There is more truth in this than McCarthy probably intended:

A tailor-made game-playing algorithm, developed and optimized by generations of scientists and software engineers, is no AI entity. It is an algorithm. Human beings analyzed the rule set, found abstractions of it, developed evaluation schemes, and found heuristics to prune the uncomputably large search tree. With brute force and megawatts of computing power they managed to fill a database with millions of more or less favorable game situations. In direct competition between game-playing algorithm and human being, these pre-computed situations provide shortcuts in the tree search that achieve superhuman performance in the end.
Is this entity an AI or an algorithm?
1. Game concept development: human.
2. Game rule definition and negotiation: human.
3. Game rule abstraction and translation into computable form: human-designed algorithm.
4. Evaluation of game situations: human-designed algorithm, computer-aided optimization.
5. Search tree heuristics: human-designed algorithm, computer-aided optimization.
6. Database of favorable situations and moves: brute-force tree search on a massively parallel supercomputer.
7. Detection of favorable situations: human-designed algorithm for pattern matching, computer-aided optimization.
8. Active playing: fully automatic use of the algorithms and information of points 3-7. No human being involved.
Unsupervised learning, search optimization, and pattern matching (points 5-7) make this class of entities weak AIs. A human being playing against such an entity will probably attribute intelligence to it: “Kasparov claims to have seen glimpses of true intelligence and creativity in some of the computer’s moves” (p. 12, Newborn [2011]).

But weak AI is not our focus. Our focus is strong AI, HLAI, and superintelligence. It is good to know that human-engineered weak-AI algorithms can achieve superhuman performance, but not a single game-playing weak AI has achieved human-level intelligence. The following story will show why:
Watch two children, Alice and Bob, playing in the street. They have found white and black pebbles and a piece of chalk. Bob has a faint idea of checkers (also called draughts, or “Dame” in German) from having seen his elder brother play it. He explains to Alice: “Let’s draw a grid of chalk lines on the road and place our pebbles into the fields. I will show you.” In a joint effort they draw several straight lines, resulting in a 7x9 grid. Then Bob starts to place his black pebbles into his starting rows as he remembers them. Alice follows suit—but she does not have enough white pebbles to fill her starting rows. They discuss their options and search for more white pebbles. After two minutes of unsuccessful searching Bob says: “Let’s remove one column, and I’ll take away two of my black pebbles.” Then Bob explains to Alice how to move her pebbles on the now smaller 7x8 grid. They start playing and enjoy their time. Bob wins most of the games, so he changes the rules to give Alice a starting advantage. Alice does not mind losing frequently. They laugh a lot. She loves Bob and is happy for every minute spent next to him.
This is a real game. It is a full-body experience with all senses. These young children manipulate their material world, create and modify abstract rules, develop strategies for winning, communicate, and have fun together.
The German Wikipedia entry for “Dame_(Spiel)” lists 3 × 4 × 4 × (3 + many more) × 2 = 288+ orthogonal rule variants. Playing Doppelkopf (a popular 4-player card game in Germany) with people you have never played with before takes at least five minutes of rule discussion at the start. This demonstrates that developing and negotiating rules is a central part of human game play.
Now suppose you told 10-year-old Bob: “Alice has to come home with me for lunch. Look, this is Roboana (a strong-AI robot); play with her instead.” You guide your girl-like robot over to Bob.

Roboana: “Hi, I’m Roboana. I saw you playing with Alice. It looked like a lot of fun. What is the game about?”

You, a member of the Roboana development team, leave the scene for lunch. Will your maybe-HLAI robot manage the situation with Bob? Will Roboana modify the rules to balance the game if her strategy proves too superior, before Bob gets annoyed and walks away? Will Bob enjoy his time with Roboana?

Bob is presumably 10 years old and so still short of adult human intelligence, yet within the next 20 years I do not expect any artificial entity to reach even this level of general intelligence. Knowing that algorithms can meet the core performance requirements of game play is only the smallest part of the problem. Therefore I prefer to call weak AI what it is: an algorithm.
In our further reading we should try not to forget that the aspects of creativity, engineering, programming, and social interaction are in most cases more complex than the core problem. Some rules are imprinted into us human beings: what a face looks like, what a fearful face looks like, how a fearful mother smells, how to smile to please, how to scream to alert one’s mother, how to spit out bitter-tasting food to protect against poisoning. Playing with our environment is imprinted into our brains as well. We enjoy manipulating things and observing the outcome with the fullest curiosity. A game is a regulated kind of play. For AI development it is worth widening the focus from games to playing.

Now we have something! We have something we can actually use! AI must be able to interact with emotional intelligence!
Although computers beat humans at board games without needing any kind of general intelligence at all, I don’t think that invalidates game-playing as a useful domain for AGI research.
The strength of AI in games is, to a significant extent, due to human input: people incorporate substantial domain knowledge into the relatively simple algorithms that game AIs are built on.
However, it is quite easy to make game AI into a far, far more challenging problem (and, I suspect, a rather more widely applicable one): consider the design of algorithms for general game playing rather than for any particular game. Basically, think of a game AI that is first given a description of the rules of the game it’s about to play, which could be any game, and then must play the game as well as possible.
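A minimal sketch of that setup, with the rule description reduced to a handful of callables; real general game playing delivers the rules in a logic language such as GDL, so everything here is my own simplification:

```python
import random

class RuleDescription:
    """Rules handed to the player only at match time (assumed interface)."""
    def __init__(self, legal_moves, next_state, is_terminal, score):
        self.legal_moves = legal_moves    # state -> list of legal moves
        self.next_state = next_state      # (state, move) -> successor state
        self.is_terminal = is_terminal    # state -> bool
        self.score = score                # terminal state -> our payoff

def general_player(rules, state, samples=200):
    # No game-specific evaluation is possible (the rules arrived minutes ago),
    # so estimate each legal move by the average payoff of random playouts.
    def playout(s):
        while not rules.is_terminal(s):
            s = rules.next_state(s, random.choice(rules.legal_moves(s)))
        return rules.score(s)
    return max(rules.legal_moves(state),
               key=lambda m: sum(playout(rules.next_state(state, m))
                                 for _ in range(samples)) / samples)
```

Because the agent cannot rely on any hand-tuned, game-specific knowledge, this setting strips away exactly the human contribution that makes single-game AIs look deceptively strong.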
It tells us that within certain bounds computers can excel at tasks. I think in the near term that means computers will continue to excel at certain tasks like personal assistance, factory labor, menial tasks, and human-aided tasks.