The game AIs for popular strategy games are often bad because the developers don’t actually have the time and resources to make a really good one, and it’s not a high priority anyway—most people playing games like Civilization want an AI that they’ll have fun defeating, not an AI that actually plays optimally.
I think you’re mostly correct on this. Sometimes difficult opponents are needed, but for almost all games that can be achieved trivially by making the AI cheat rather than by improving the algorithms. That said, when playing against an AI you do want it to at least appear intelligent; although humans are often quite easy to fool with cheating, a good algorithm is still a better way of creating that appearance than outright fakery. It doesn’t have to be optimal, and even if it is you can constrain it enough to make it beatable, or intentionally design particular weaknesses into the AI so that humans can have fun hunting for those weaknesses and feel good when they find them. Ultimately, though, the point is that the standard approach of heavy scripting still tends to get the job done, and developers almost never find the resource expenditure for good AI to be worthwhile.
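To make the “cheating” point concrete, here’s a toy sketch in Python (Civ-style difficulty labels, but the structure and multipliers are entirely made up, not taken from any actual game): the decision logic is identical at every level, and higher difficulties just hand the AI bonuses.

```python
from dataclasses import dataclass

# Civ-style difficulty labels, with made-up multipliers. Higher difficulties
# don't make the AI smarter; they just hand it economic bonuses.
DIFFICULTY_BONUSES = {
    "chieftain": {"income_mult": 0.8, "cost_mult": 1.2},
    "prince":    {"income_mult": 1.0, "cost_mult": 1.0},
    "emperor":   {"income_mult": 1.4, "cost_mult": 0.8},
    "deity":     {"income_mult": 2.0, "cost_mult": 0.5},
}

@dataclass
class AIPlayer:
    difficulty: str
    gold: float = 0.0

    def collect_income(self, base_income: float) -> None:
        # Identical decision logic at every difficulty; only the bonus changes.
        self.gold += base_income * DIFFICULTY_BONUSES[self.difficulty]["income_mult"]

    def unit_cost(self, base_cost: float) -> float:
        return base_cost * DIFFICULTY_BONUSES[self.difficulty]["cost_mult"]

ai = AIPlayer(difficulty="deity")
ai.collect_income(base_income=10)
print(ai.gold, ai.unit_cost(100))  # 20.0 50.0
```

A couple of dozen lines like this is vastly cheaper than better algorithms, and for most players the resulting difficulty curve is indistinguishable from a smarter opponent.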
However, I think that genuinely superhuman AI in games like Starcraft and Civilization is far harder than you imply. For example, in RTS games (as Lumifer has said) the AI has a built-in advantage due to its capacity for micromanagement. Moreover, although the example you cite has an AI built in a “few months” of work beating a high-level human player, I suspect that was a one-off occurrence. Beating a human once is quite different from consistently beating a human.
If you look at the results of the AIIDE Man vs Machine matches, the top bots consistently lose every game to Bakuryu (the human representative). According to this report,
In this match it was shown that the true weakness of state of the art StarCraft AI systems was that humans are very adept at recognizing scripted behaviors and exploiting them to the fullest. A human player in Skynet’s position in the first game would have realized he was being taken advantage of and adapted his strategy accordingly, however the inability to put the local context (Bakuryu kiting his units around his base) into the larger context of the game (that this would delay Skynet until reinforcements arrived) and then the lack of strategy change to fix the situation led to an easy victory for the human. These problems remain as some of the main challenges in RTS AI today: to both recognize the strategy and intent of an opponent’s actions, and how to effectively adapt your own strategy to overcome them.
It seems to me that the best AIs in these kinds of games work by focusing on a relatively narrow set of overall strategies and then executing those strategies as flawlessly as possible. In something like Starcraft the AI’s potential for this kind of execution is definitely superhuman, but as the Man vs Machine matches demonstrate this really isn’t enough.
In the case of the Civilization games, the fact that they aren’t real-time removes quite a lot of the advantage that an AI gets in terms of micromanagement. Also, like in Starcraft, classical AI techniques really don’t work particularly well due to the massive branching factor.
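To give a rough sense of the scale (every number below is made up, just chosen to be a plausible order of magnitude), a quick back-of-the-envelope sketch shows why minimax-style search is hopeless here without drastic abstraction:

```python
# Made-up but plausible numbers: a mid-game Civ player with 20 units, each
# with ~10 reasonable orders per turn, plus a few city and tech decisions.
units, orders_per_unit = 20, 10
city_choices, tech_choices = 8, 5

branching_factor = (orders_per_unit ** units) * city_choices * tech_choices
print(f"~{branching_factor:.1e} joint moves per turn")  # ~4.0e+21

# For comparison, chess has ~35 legal moves per position, so exhaustively
# searching a *single* Civ turn is already comparable to ~14 plies of chess:
print(f"35^14 = ~{35 ** 14:.1e}")                       # ~4.1e+21
```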
Granted, taking a similar approach to the Starcraft bots might still work pretty well; I believe there are some degenerate strategies in many of the Civ games that are quite strong on their own, and if you program an AI to execute them with a high degree of precision and good micromanagement, and add some decent reactive play, that might be good enough.
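Here’s a toy sketch of the kind of architecture I have in mind (purely hypothetical, not modelled on any real bot): a fixed build order executed as precisely as possible, with a thin layer of hard-coded reactions bolted on. Anything outside the scripted cases falls through, which is exactly the kind of gap a strong human learns to find and exploit.

```python
# Hypothetical scripted bot: one "degenerate" strategy executed precisely,
# plus a few canned reactions. No real strategy recognition or adaptation.
BUILD_ORDER = ["settler", "warrior", "warrior", "settler", "library"]

REACTIONS = {
    # observed situation -> canned response
    "enemy_units_near_city": "build_defender",
    "undefended_enemy_city": "attack_with_everything",
}

def choose_action(game_state: dict, step: int) -> str:
    # Reactive layer first: pattern-match against a fixed list of situations.
    for situation, response in REACTIONS.items():
        if game_state.get(situation):
            return response
    # Otherwise, keep grinding through the scripted build order.
    if step < len(BUILD_ORDER):
        return f"build_{BUILD_ORDER[step]}"
    return "expand_toward_nearest_resource"

# A human who figures out the script can bait the canned reactions, e.g. by
# feinting at a city to keep the bot stuck on "build_defender" indefinitely.
print(choose_action({"enemy_units_near_city": True}, step=3))  # build_defender
```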
However, unless the game is simply broken due to bad design, I suspect that you would find that, like the Starcraft bots, AIs built along those lines would still be easily exploited and consistently beaten by the best human players.