For Civilization in particular, it seems very likely that AI would be wildly superhuman if the game received the same kind of attention as other games, simply because the techniques used in Go and Backgammon, together with a bunch of ad hoc logic for navigating the tech tree, should be able to get so much traction.
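To make that concrete, here is a minimal sketch (in Python, every name hypothetical) of the simplest relative of the rollout techniques used for Backgammon and Go: flat Monte Carlo move selection, pointed at a Civ-like turn. It assumes a game-state object exposing `legal_actions()`, `apply(action)`, `is_over()`, and `score(player)`; a real engine would add tree search, tech-tree heuristics, and a learned evaluation on top.

```python
import random

def random_playout(state, player, max_turns=200):
    """Play random moves to a terminal state (or a turn cap) and return the result for `player`."""
    turns = 0
    while not state.is_over() and turns < max_turns:
        state = state.apply(random.choice(state.legal_actions()))
        turns += 1
    return state.score(player)  # e.g. 1.0 for a win, 0.0 for a loss, or a heuristic score at the cap

def choose_action(state, player, playouts_per_action=50):
    """Pick the legal action whose random playouts score best on average."""
    best_action, best_value = None, float("-inf")
    for action in state.legal_actions():
        total = sum(random_playout(state.apply(action), player)
                    for _ in range(playouts_per_action))
        value = total / playouts_per_action
        if value > best_value:
            best_action, best_value = action, value
    return best_action
```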
Agreed. It’s not Civilization, but Starcraft is also partially observable and non-deterministic, and a team of students managed to bring their Starcraft AI to the level of being able to defeat a “top 16 in Europe”-level human player after only a “few months” of work.
The game AIs for popular strategy games are often bad because the developers don’t actually have the time and resources to make a really good one, and it’s not a high priority anyway—most people playing games like Civilization want an AI that they’ll have fun defeating, not an AI that actually plays optimally.
I think that if you played on a big map (freeciv supports really huge ones), then your goals (as in the real world) could be better fulfilled by playing WITH (not against) the AI. For example, managing 5,000 engineers manually could take several hours per turn.
You could also explore more concepts in this game (for example geometric growth, a metastasis-like method of spreading your civilization, and of course cooperation with some type of AI)…
I think it would be easy to create a Civilization AI that chooses to grow along a certain path with a certain victory type in mind. So if the AI picks a military victory, it will focus on building troops, acquiring territory, and maintaining states of war with other players. What might be hard is other victory conditions, like diplomatic or cultural ones, because those require much more intuitive and nuanced decision making without a totally clear course of action.
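A toy sketch of that "pick a win condition, then bias everything toward it" idea: score each candidate build against weights chosen by the victory plan. All names and numbers here are invented for illustration.

```python
# Hypothetical victory plans and how much each plan values each kind of output.
VICTORY_WEIGHTS = {
    "military": {"units": 3.0, "territory": 2.0, "economy": 1.0, "culture": 0.2},
    "cultural": {"units": 0.5, "territory": 0.5, "economy": 1.5, "culture": 3.0},
}

def pick_build(victory_plan, candidate_builds):
    """candidate_builds: list of (name, contributions), where contributions maps
    the same keys as VICTORY_WEIGHTS values, e.g. {"units": 2, "economy": 1}."""
    weights = VICTORY_WEIGHTS[victory_plan]
    def score(build):
        _, contributions = build
        return sum(weights.get(k, 0.0) * v for k, v in contributions.items())
    return max(candidate_builds, key=score)[0]

# Example: a military-focused AI prefers the barracks over the amphitheatre.
print(pick_build("military", [
    ("barracks",     {"units": 2, "economy": 0}),
    ("amphitheatre", {"culture": 2, "economy": 1}),
]))
```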
most people playing games like Civilization want an AI that they’ll have fun defeating, not an AI that actually plays optimally.
The popular AI mods for Civ actually tend to make the AIs less thematic—they’re less likely to be nice to you just because of a thousand year harmonious and profitable peace, for example, and more likely to build unattractive but efficient Stacks of Doom. Of course there are selection effects on who installs such mods.
The game AIs for popular strategy games are often bad because the developers don’t actually have the time and resources to make a really good one, and it’s not a high priority anyway—most people playing games like Civilization want an AI that they’ll have fun defeating, not an AI that actually plays optimally.
I think you’re mostly correct on this. Sometimes difficult opponents are needed, but for almost all games that can be trivially achieved by making the AI cheat rather than improving the algorithms. That said, when playing a game vs an AI you do want the AI to at least appear to be intelligent; although humans can often be quite easy to fool with cheating, a good algorithm is still a better way of giving this appearance than a fake. It doesn’t have to be optimal, and even if it is you can constrain it enough to make it beatable, or intentionally design different kinds of weaknesses into the AI so that humans can have fun looking for those weaknesses and feel good when they find them. Ultimately, though, the point is that the standard approach of having lots and lots of scripting still tends to get the job done, and developers almost never find the resource expenditure for good AI to be worthwhile.
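A toy illustration (entirely hypothetical, not from any real game) of the two tricks described above: a simple scripted priority list standing in for real planning, plus "cheating" via a difficulty-based resource bonus, and one deliberately exploitable weakness the player can discover and feel clever about.

```python
def ai_turn(game, difficulty):
    # "Cheating": higher difficulties just hand the AI extra resources.
    game.ai.resources += {"easy": 0, "normal": 5, "hard": 15}[difficulty]

    # Scripted behaviour: a fixed priority list rather than any real planning.
    if game.ai.is_under_attack():
        game.ai.build("defender")
    elif game.ai.resources >= game.cost("settler") and game.ai.cities < 6:
        game.ai.build("settler")
    else:
        game.ai.build("warrior")

    # Deliberate weakness: the AI never escorts its settlers, so a player who
    # scouts aggressively can pick them off and be rewarded for spotting it.
    for settler in game.ai.unescorted_settlers():
        settler.move_toward(game.nearest_free_city_site(settler))
```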
However, I think that genuinely superhuman AI in games like Starcraft and Civilization is far harder than you imply. For example, in RTS games (as Lumifer has said) the AI has a built-in advantage due to its capacity for micromanagement. Moreover, although the example you cite has an AI built in a “few months” of work beating a high-level human player, I think that was quite likely to be a one-off occurrence. Beating a human once is quite different from consistently beating a human.
If you look at the results of the AIIDE Man vs Machine matches, the top bots consistently lose every game to Bakuryu (the human representative). According to this report,
In this match it was shown that the true weakness of state of the art StarCraft AI systems was that humans are very adept at recognizing scripted behaviors and exploiting them to the fullest. A human player in Skynet’s position in the first game would have realized he was being taken advantage of and adapted his strategy accordingly, however the inability to put the local context (Bakuryu kiting his units around his base) into the larger context of the game (that this would delay Skynet until reinforcements arrived) and then the lack of strategy change to fix the situation led to an easy victory for the human. These problems remain as some of the main challenges in RTS AI today: to both recognize the strategy and intent of an opponent’s actions, and how to effectively adapt your own strategy to overcome them.
It seems to me that the best AIs in these kinds of games work by picking a relatively narrow set of overall strategies and then focusing on executing those strategies as flawlessly as possible. In something like Starcraft the AI’s potential for this kind of execution is definitely superhuman, but as the Man vs Machine matches demonstrate, this really isn’t enough.
In the case of the Civilization games, the fact that they aren’t real-time removes quite a lot of the advantage that an AI gets in terms of micromanagement. Also, like in Starcraft, classical AI techniques really don’t work particularly well due to the massive branching factor.
Granted, taking a similar approach to the Starcraft bots might still work pretty well; I believe there are some degenerate strategies in many of the Civ games that are quite strong on their own, and if you program an AI to execute them with a high degree of precision and good micromanagement, and add some decent reactive play, that might be good enough.
However, unless the game is simply broken due to bad design, I suspect that you would find that, like the Starcraft bots, AIs designed on that kind of idea would still be easily exploited and consistently beaten by the best human players.
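The report’s closing point, recognize the opponent’s strategy and adapt your own, is the part the scripted approach misses. A toy sketch of what even a crude version might look like (real bots use far richer features; everything here is invented):

```python
def classify_opening(scouted):
    """scouted: dict of observed counts, e.g. {"workers": 14, "barracks": 2, "bases": 1}."""
    if scouted.get("barracks", 0) >= 2 and scouted.get("bases", 1) == 1:
        return "all_in_rush"
    if scouted.get("bases", 1) >= 2:
        return "economic_expand"
    return "unknown"

# How we answer each recognized opening; a fixed table here, but the point is
# that the plan changes in response to what was scouted.
RESPONSES = {
    "all_in_rush": "turtle_and_defend",
    "economic_expand": "pressure_early",
    "unknown": "standard_macro",
}

def adapt(scouted):
    return RESPONSES[classify_opening(scouted)]
```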
In RTS games an AI has a large built-in advantage over humans because it can micromanage so much better.
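A hypothetical sketch of why micromanagement favours the machine: a kiting loop that fires, steps back out of the enemy’s reach while the weapon reloads, and fires again. A bot can run this for every ranged unit on every frame; a human cannot.

```python
SAFETY_MARGIN = 1.0  # extra stand-off distance, in tiles; arbitrary

def kite(unit, enemy):
    if unit.weapon_ready() and unit.distance_to(enemy) <= unit.range:
        unit.attack(enemy)                                    # shoot when the cooldown is up
    elif enemy.distance_to(unit) < enemy.range + SAFETY_MARGIN:
        unit.move(unit.position.away_from(enemy.position))    # back off while reloading
    else:
        unit.move_toward(enemy)                               # close back to our own range

def micro_step(my_ranged_units, enemies):
    for unit in my_ranged_units:
        target = min(enemies, key=unit.distance_to)           # nearest threat
        kite(unit, target)
```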
That’s a very valid point: a successful AI in a game is the one which puts up a decent fight before losing.
Have you played this type of game?
[pollid:777]