Thanks for the nice summary and the questions. I think it is worth noting that AI is good only at some board games (fully observable, deterministic games) and not at others (partially observable, non-deterministic games such as, say, Civilization).
Do you know of a partially observable game for which AI lags behind humans substantially? These examples are of particular interest to me because they would significantly revise my understanding of what problems are hard and easy.
The most prominent partial-information games that I know of are Bridge and Poker, and AIs can now win at both of these (both of which in fact proved to be much easier than the classic deterministic games). Backgammon is random, and also turned out to be relatively easy—in fact the randomness itself is widely considered to have made the game easy for computers! Scrabble is the other example that comes to mind, and there the situation is the same.
For Civilization in particular, it seems very likely that AI would be wildly superhuman if it were subject to the same kind of attention as other games, simply because the techniques used in Go and Backgammon, together with a bunch of ad hoc logic for navigating the tech tree, should be able to get so much traction.
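For a sense of why those techniques transfer, here is a minimal sketch of the Monte-Carlo playout idea at the heart of the Go programs, applied to a toy game (Nim: take 1-3 stones per turn, taking the last stone wins). Everything here is illustrative; real programs wrap this playout signal in a full search tree and much more.

```python
import random

def playout(pile, we_move):
    # Finish the game with uniformly random moves on both sides.
    # Returns True if "we" (the player best_move is choosing for)
    # take the last stone, which wins in this version of Nim.
    while True:
        pile -= random.randint(1, min(3, pile))
        if pile == 0:
            return we_move
        we_move = not we_move

def best_move(pile, n_playouts=2000):
    # Score each legal move by random playouts from the resulting
    # position (opponent to move), then pick the best win rate.
    scores = {}
    for move in range(1, min(3, pile) + 1):
        if pile - move == 0:
            scores[move] = 1.0  # taking the last stone wins outright
        else:
            wins = sum(playout(pile - move, we_move=False)
                       for _ in range(n_playouts))
            scores[move] = wins / n_playouts
    return max(scores, key=scores.get)
```

From a pile of 5 this reliably finds the optimal move (take 1, leaving a multiple of 4) purely from random playouts, with no Nim-specific knowledge coded in.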
Agreed. It’s not Civilization, but Starcraft is also partially observable and non-deterministic, and a team of students managed to bring their Starcraft AI to the level of being able to defeat a “top 16 in Europe”-level human player after only a “few months” of work.
The game AIs for popular strategy games are often bad because the developers don’t actually have the time and resources to make a really good one, and it’s not a high priority anyway—most people playing games like Civilization want an AI that they’ll have fun defeating, not an AI that actually plays optimally.
I think that if you played on a big map (freeciv supports really huge ones), then your goals (as in the real world) could be better fulfilled by playing WITH (not against) an AI. For example, managing 5 thousand engineers manually could take several hours per turn.
You could also explore more concepts in this game (for example geometric growth, a metastasis-like method of spreading a civilisation, and of course cooperation with some type of AI)…
I think it would be easy to create a Civilization AI that would choose a growth path with a certain victory type in mind. So if the AI picks a military victory, it will focus on building troops, acquiring territory, and maintaining states of war with other players. What might be hard is other victory conditions, like diplomatic or cultural ones, because those require much more intuitive and nuanced decision making without a totally clear course of action.
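As a toy illustration of that "pick a victory type, then bias everything toward it" idea (all names and numbers here are invented, not from any actual Civ code):

```python
# Hypothetical sketch: each victory path weights build categories differently,
# and the AI simply builds whatever scores highest under its chosen path.
PATH_WEIGHTS = {
    "military":   {"units": 3.0, "economy": 1.0, "culture": 0.2},
    "cultural":   {"units": 0.5, "economy": 1.0, "culture": 3.0},
    "diplomatic": {"units": 0.5, "economy": 2.0, "culture": 1.0},
}

def choose_build(path, options):
    # options: (name, category, base_value) tuples for what a city could build
    weights = PATH_WEIGHTS[path]
    return max(options, key=lambda o: o[2] * weights[o[1]])[0]

options = [("swordsman", "units", 2.0),
           ("temple", "culture", 2.0),
           ("market", "economy", 1.5)]
```

The military path falls out almost for free (the scoring immediately favours troops), but no static weighting like this captures when to befriend whom or what to trade, which is exactly the nuance the diplomatic and cultural paths need.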
most people playing games like Civilization want an AI that they’ll have fun defeating, not an AI that actually plays optimally.
The popular AI mods for Civ actually tend to make the AIs less thematic—they’re less likely to be nice to you just because of a thousand year harmonious and profitable peace, for example, and more likely to build unattractive but efficient Stacks of Doom. Of course there are selection effects on who installs such mods.
The game AIs for popular strategy games are often bad because the developers don’t actually have the time and resources to make a really good one, and it’s not a high priority anyway—most people playing games like Civilization want an AI that they’ll have fun defeating, not an AI that actually plays optimally.
I think you’re mostly correct on this. Sometimes difficult opponents are needed, but for almost all games that can be trivially achieved by making the AI cheat rather than improving the algorithms. That said, when playing a game vs an AI you do want the AI to at least appear to be intelligent; although humans can often be quite easy to fool with cheating, a good algorithm is still a better way of giving this appearance than a fake. It doesn’t have to be optimal, and even if it is you can constrain it enough to make it beatable, or intentionally design different kinds of weaknesses into the AI so that humans can have fun looking for those weaknesses and feel good when they find them. Ultimately, though, the point is that the standard approach of having lots and lots of scripting still tends to get the job done, and developers almost never find the resource expenditure for good AI to be worthwhile.
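The cheating described above is usually nothing more than a handicap multiplier on the AI's income and production; the numbers below are purely illustrative, not from any real game:

```python
# Difficulty levels that "cheat": the AI runs the same weak algorithm at
# every level, and only its resource income is scaled up or down.
DIFFICULTY_BONUS = {"easy": 0.7, "normal": 1.0, "hard": 1.4, "deity": 2.0}

def ai_income(base_income, difficulty):
    # Identical decision-making, scaled economy.
    return base_income * DIFFICULTY_BONUS[difficulty]
```

This is roughly how the Civilization series itself scales difficulty: higher levels give the AI extra resources and discounts rather than better decisions.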
However, I think that genuinely superhuman AI in games like Starcraft and Civilization is far harder than you imply. For example, in RTS games (as Lumifer has said) the AI has a built-in advantage due to its capacity for micromanagement. Moreover, although the example you cite has an AI beating a high-level human player after only a “few months” of work, that was quite likely a one-off occurrence. Beating a human once is quite different from consistently beating a human.
If you look at the results of the AIIDE Man vs Machine matches, the top bots consistently lose every game to Bakuryu (the human representative). According to this report,
In this match it was shown that the true weakness of state of the art StarCraft AI systems was that humans are very adept at recognizing scripted behaviors and exploiting them to the fullest. A human player in Skynet’s position in the first game would have realized he was being taken advantage of and adapted his strategy accordingly, however the inability to put the local context (Bakuryu kiting his units around his base) into the larger context of the game (that this would delay Skynet until reinforcements arrived) and then the lack of strategy change to fix the situation led to an easy victory for the human. These problems remain as some of the main challenges in RTS AI today: to both recognize the strategy and intent of an opponent’s actions, and how to effectively adapt your own strategy to overcome them.
It seems to me that the best AIs in these kinds of games work by focusing on a relatively narrow set of overall strategies, and then executing those strategies as flawlessly as possible. In something like Starcraft the AI’s potential for this kind of execution is definitely superhuman, but as the Man vs Machine matches demonstrate, that really isn’t enough.
In the case of the Civilization games, the fact that they aren’t real-time removes quite a lot of the advantage that an AI gets in terms of micromanagement. Also, like in Starcraft, classical AI techniques really don’t work particularly well due to the massive branching factor.
Granted, taking a similar approach to the Starcraft bots might still work pretty well; I believe there are some degenerate strategies in many of the Civ games that are quite strong on their own, and if you program an AI to execute them with a high degree of precision and good micromanagement, and add some decent reactive play, that might be good enough.
However, unless the game is simply broken due to bad design, I suspect that you would find that, like the Starcraft bots, AIs designed on that kind of idea would still be easily exploited and consistently beaten by the best human players.
I wouldn’t say that poker is “much easier than the classic deterministic games”, and poker AI still lags significantly behind humans in several regards. Basically, the strongest poker bots at the moment are designed around solving for Nash equilibrium strategies (of an abstracted version of the game) in advance, but this fails in a couple of ways:
1. These approaches haven’t really been extended past 2- or 3-player games.
2. Playing a NE strategy makes sense if your opponent is doing the same, but your opponent almost always won’t be. Thus, in order to play better, poker bots should be able to exploit weak opponents.
Both of these are rather nontrivial problems.
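To make the equilibrium-solving point concrete: regret matching, the basic loop inside the counterfactual-regret methods those bots are built on, can be run in self-play on rock-paper-scissors, where the Nash equilibrium is to mix uniformly. This is a toy sketch, nothing like a full poker abstraction:

```python
import random

# PAYOFF[a][b]: payoff to the row player; actions are rock, paper, scissors.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regrets(regrets):
    # Play each action in proportion to its positive accumulated regret.
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1 / 3, 1 / 3, 1 / 3]
    return [p / total for p in positive]

def train(iterations=50000):
    regrets = [0.0, 0.0, 0.0]
    strategy_sum = [0.0, 0.0, 0.0]
    for _ in range(iterations):
        strat = strategy_from_regrets(regrets)
        for i in range(3):
            strategy_sum[i] += strat[i]
        a = random.choices(range(3), weights=strat)[0]
        b = random.choices(range(3), weights=strat)[0]  # self-play copy
        # Regret of not having played i instead of a against b.
        for i in range(3):
            regrets[i] += PAYOFF[i][b] - PAYOFF[a][b]
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]  # average strategy
```

The average strategy converges toward (1/3, 1/3, 1/3), while the instantaneous strategy keeps cycling, which is why these methods always report the average.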
Kriegspiel, a partially observable version of chess, is another example where the best humans are still better than the best AIs, although I’ll grant that the gap isn’t a particularly big one, and likely mostly has to do with it not being a significant research focus.
Interestingly enough, a team at MIT managed to make an AI that learned how to play from the manual and proceeded to win 80% of its games against the game’s built-in AI, though I don’t know which difficulty it was set to, or how the freeciv AI compares to the one in normal Civilization.
The game AIs for popular strategy games are often bad because the developers don’t actually have the time and resources to make a really good one, and it’s not a high priority anyway—most people playing games like Civilization want an AI that they’ll have fun defeating, not an AI that actually plays optimally.
In RTS games an AI has a large built-in advantage over humans because it can micromanage so much better.
That’s a very valid point: a successful AI in a game is the one which puts up a decent fight before losing.
Have you played this type of game?
[pollid:777]