Before doing the whole EA thing, I played StarCraft semi-professionally. I was consistently ranked Grandmaster, primarily making money from coaching players of all skill levels. I also co-authored an ML paper on StarCraft II win prediction.
TL;DR: AlphaStar shows us what it will look like when humans are beaten in a completely fair fight.
I feel fundamentally confused about a lot of the discussion surrounding AlphaStar. The entire APM debate feels completely misguided to me and seems to be born out of fundamental misunderstandings of what it means to be good at StarCraft.
Being skillful at StarCraft is the ability to compute which set of actions needs to be made, and to do so very fast. A low-skilled player has to spend seconds figuring out their next move, whereas a pro player will determine it in milliseconds. This skill takes years to build, through mental caching of game states, so that the right moves become instinct and can be computed quickly without much mental effort.
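To make the caching point concrete, here is a toy sketch (the game states, responses, and timings are all invented for illustration): the pro's advantage is that most decisions are effectively lookups into years of practice, while the amateur has to deliberate from scratch.

```python
import time

# Illustration only: invented game states and responses.
DRILLED_RESPONSES = {
    # abstracted situation -> response practiced so often it is instinct
    ("early_game", "enemy_proxy_gateway"): "pull_probes_and_hold_the_wall",
    ("mid_game", "enemy_blink_stalkers"): "add_immortals_and_shield_batteries",
}

def amateur_decide(state):
    """Work the answer out consciously: seconds, not milliseconds."""
    time.sleep(2)  # stand-in for slow, effortful deliberation
    return "late_but_reasonable_response"

def pro_decide(state):
    """Years of practice turn most decisions into near-instant cache hits."""
    if state in DRILLED_RESPONSES:
        return DRILLED_RESPONSES[state]  # instinct: effectively free
    return amateur_decide(state)         # unfamiliar spot: back to slow thinking

start = time.time()
print(pro_decide(("mid_game", "enemy_blink_stalkers")), f"{time.time() - start:.3f}s")
```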
As you showed clearly in the blog post, MaNa (or any other player) can reach a much higher APM by mindlessly tabbing between control groups. You can click predetermined spots on the screen more than fast enough to control individual units.
We are physically capable of playing this fast, yet we do not.
The reason for this is that in a real game my actions are limited by the speed at which I can figure them out. Likewise, if you were to play speed chess against AlphaZero you would get creamed, not because you can't move the pieces fast enough, but because AlphaZero can calculate much better moves much faster than you can.
I am convinced a theoretical AI playing with a mouse and keyboard, with motor controls equivalent to a human's, would largely be making the same 'inhuman' plays we are seeing currently. Difficulty of input is simply not the bottleneck.
AlphaStar can only do its 'inhuman' moves because it's capable of calculating StarCraft equations MUCH faster than humans are. Likewise, I can only do 'pro' moves because I'm capable of calculating StarCraft equations much faster than an amateur can.
You could argue that it's not showcasing the skills we're interested in, as it doesn't need to put the same emphasis on long-term planning and outsmarting its opponent that equally matched human players have to. But that would also be the case if you put me up against someone who's never played the game.
If what we really care about is proving that it can do long-term thinking and planning in a game with a large action space and imperfect information, why choose StarCraft? Why not select something like Frozen Synapse, where the only way to win is to fundamentally understand these concepts?
The entire debate about 'fairness' seems somewhat misguided to me. Even if we found an APM measure that looks fair, I could move the goalposts and point out that it makes selections and commands with perfect precision, whereas a human has to do it through a mouse and keyboard. There are moves that are extremely risky to pull off due to the difficulty of precisely clicking things. If we supplied it with a virtual mouse to move around, I could move the goalposts again and complain that my eyes cannot take in the entire screen at once.
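Part of why any single APM number looks arbitrary is that averages and bursts tell very different stories. A quick toy calculation (my own illustration with made-up action timestamps, not any metric DeepMind actually used):

```python
def apm_in_windows(action_times_s, window_s=5.0):
    """APM measured over a sliding window starting at each action."""
    apms = []
    for t in action_times_s:
        in_window = [u for u in action_times_s if t <= u < t + window_s]
        apms.append(len(in_window) * (60.0 / window_s))
    return apms

# Toy log: 40 calm actions over a minute, then a 2-second burst of 30 actions.
calm = [i * 1.5 for i in range(40)]
burst = [60.0 + i * (2.0 / 30) for i in range(30)]
log = calm + burst

avg_apm = len(log) / ((log[-1] - log[0]) / 60.0)
peak_apm = max(apm_in_windows(log))
print(f"average APM: {avg_apm:.0f}, peak 5-second-window APM: {peak_apm:.0f}")
# A cap on the average can look 'fair' while leaving superhuman bursts untouched.
```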
It's clear AlphaStar is not a fair fight, yet I think we got a very good look at what that fair fight will eventually look like. AlphaStar fundamentally is what superhuman StarCraft intelligence looks like (or at least it will be with more training), and it's abusing the exact skill set that makes pro players stand out from amateurs in the first place.
I think your feelings stem from considering it enough if AS simply beats human players, while the APM whiners would like AS to learn all the aspects of StarCraft skill it can reasonably be expected to learn.
The agents on the ladder don't scout much and can't react accordingly. They don't tech switch midgame, and some of them get utterly confused in ways a human wouldn't. The agent in game 11 vs. MaNa couldn't figure out it could build one phoenix to kill the warp prism and chose to follow it with three oracles (units which can't shoot at flying units). The ladder agents display similar mistakes.
Considering how many millions of dollars AS has cost already (it could be hundreds of millions at this point), these holes are simply too big to call the agents robust, or the project complete and StarCraft conquered.
If they could somehow manage to pull off an ASZero which humans can't reliably abuse, I'd admit they've done all there is to do. Then they could declare victory.
I think you’re right when it comes to SC2, but that doesn’t really matter for DeepMind’s ultimate goal with AlphaStar: to show an AI that can learn anything a human can learn.
In a sense AlphaStar just proves that SC2 is not balanced for superhuman micro ( https://news.ycombinator.com/item?id=19038607 ). A big Stalker army shouldn't beat a big Immortal army. In current SC2 it obviously can, with good enough micro. There are probably all sorts of other situations where soft-scissors beats soft-rock with good enough micro.
Does this make AlphaStar's SC2 performance illegitimate? Not really. Though in the specific Stalker-Immortal fight, an agent giving input through an actual robot looking at an actual screen, and having to cycle through control groups to check HP and select units, probably would not have been able to achieve that level of micro.
The deeper problem is that this isn't DeepMind's goal. It just means that SC2 is a cognitively simpler game than initially thought (note: not easy; simple in the sense that a lot of the strategy employed by humans is unnecessary given sufficient athletic skill). The higher goal of AlphaStar is to prove that an AI can be trained from nothing to learn the rules of the game and then behave in a human-like, long-term fashion: scout the opponent, react to their strategy with your own strategy, etc.
Simply bulldozing the opponent with superior micro and not even worrying about their counterplay (since there is no counterplay) is not particularly smart. It's certainly still SC2; it just reveals that SC2 is a much simpler game (when you have superhuman micro).
You could argue that it's not showcasing the skills we're interested in, as it doesn't need to put the same emphasis on long-term planning and outsmarting its opponent that equally matched human players have to. But that would also be the case if you put me up against someone who's never played the game.
Interesting point. Would it be fair to say that, in a tournament match, a human pro player is behaving much more like a reinforcement learning agent than a general intelligence using System 2? In other words, the human player is also just executing reflexes he has gained through experience, and not coming up with ingenious novel strategies in the middle of a game.
I guess it was unreasonable to complain about the lack of inductive reasoning and game-theoretic thinking in AlphaStar from the beginning, since DeepMind is an RL company and RL agents just don't do that sort of stuff. But I think it's fair to say that AlphaStar's victory was much less satisfying than AlphaZero's, being not only unable to generalize across multiple RTS games, but also unable to explore the strategy space of a single game (hence the incentivized use of certain units during training). I think we all expected to see perfect game sense and situation-dependent strategy choice, but instead blink stalkers is apparently the one build to rule them all.
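For readers who haven't seen how that kind of incentive is usually implemented: a common trick in RL is to shape the reward so that otherwise-ignored behaviors earn a little extra credit during training. The sketch below is a generic illustration of that idea with made-up unit names and bonus values; it is not a claim about AlphaStar's actual training setup.

```python
# Generic reward-shaping illustration; constants and unit names are invented.
ENCOURAGED_UNITS = {"phoenix": 0.05, "dark_templar": 0.05}

def shaped_reward(win_loss_reward, units_built):
    """Add a small pseudo-reward for building units the agent tends to ignore."""
    bonus = sum(ENCOURAGED_UNITS.get(unit, 0.0) for unit in units_built)
    return win_loss_reward + bonus

# Even a lost game (-1) earns some credit for experimenting with phoenixes,
# which nudges exploration toward parts of the strategy space the agent skips.
print(shaped_reward(-1.0, ["stalker", "phoenix", "phoenix"]))  # -0.9
```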
I think that's a very fair way to put it, yes. One way this becomes very apparent is that you can have a conversation with a StarCraft player while he's playing. It will be clear the player is not paying you his full attention at particularly demanding moments, however.
Novel strategies are thought up in between games and refined through dozens of practice games. In the end you have a mental decision tree of how to respond to most situations that could arise. Without having played much chess, I imagine this is how people handle chess openings as well.
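That decision tree is something you could almost write down. A toy sketch (the matchups, scouted situations, and responses are invented, and real preparation goes far deeper): in-game 'strategy' is mostly retrieval from a structure like this, built and revised between games.

```python
# Invented situations and responses, purely to illustrate the shape of the thing.
PREPARED_RESPONSES = {
    "PvP": {
        "scouted_proxy_gateway": "hold_the_ramp_then_counterattack",
        "scouted_fast_expand": "take_own_expansion_and_match_economy",
        "no_information": "play_standard_and_poke_for_information",
    },
    "PvZ": {
        "scouted_early_pool": "wall_off_and_defend",
        "scouted_three_hatcheries": "pressure_before_the_third_base_saturates",
    },
}

def in_game_decision(matchup, scouted):
    """During the game this is a lookup, not fresh strategic invention."""
    branch = PREPARED_RESPONSES.get(matchup, {})
    return branch.get(scouted, branch.get("no_information", "improvise"))

print(in_game_decision("PvP", "scouted_proxy_gateway"))
```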
I considered using System 1 and System 2 analogies, but because of certain reservations I have with the dichotomy, I opted not to. Basically, I don't think you can cleanly divide human intelligence into those two categories.
Ask a StarCraft player why they made a certain maneuver and they will, for the most part, be able to tell you why they did it, despite never having thought the reason out loud until you asked. There is some deep strategic thinking being done at the instinctual level. This intelligence is just as real as System 2 intelligence and should not be dismissed as being merely reflexes.
My central critique is essentially of the view that StarCraft 'mechanics' are unintelligent. Every small maneuver has a (most often implicit) reason for being made. StarCraft players are not limited by their physical capabilities nearly as much as they are limited by their ability to think fast enough. If we are interested in something other than what it looks like when someone can think at much higher speeds than humans, we should pick a different game than StarCraft.
I think the abstract question of how to cognitively manage a “large action space” and “fog of war” is central here.
In some sense StarCraft could be seen as turn-based, with each turn lasting a tiny fraction of a second, but this framing makes the action space of a beginning-to-end game *enormous*. Maybe not so enormous that a bigger data center couldn't fix it? In some sense, brute force can eventually solve ANY problem tractable to a known "vaguely O(N*log(N))" algorithm.
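For a sense of scale, here is a back-of-the-envelope version of that framing. The numbers are assumptions chosen for illustration: a 10-minute game, roughly 22 simulation steps per second (about SC2's rate on "faster" speed), and a wildly conservative 10 available actions per step.

```python
import math

steps = 10 * 60 * 22.4        # decision points in a single 10-minute game
actions_per_step = 10         # very conservative branching factor
log10_trajectories = steps * math.log10(actions_per_step)
print(f"~10^{log10_trajectories:.0f} distinct action sequences per game")
# ~10^13440, versus the oft-quoted ~10^120 possible games of chess.
```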
BUT facing "a limit that forces meta-cognition" is a key idea for "the reason to apply AI to an RTS next, as opposed to a turn-based game."
If DeepMind solves it with "merely a bigger data center" then there is a sense in which maybe DeepMind has not yet found the kinds of algorithms that deal with "nebulosity" as an explicit part of the action space (and which are expected by numerous people (including me) to be widely useful in many domains).
(Tangent: The Portia spider is relevant here because it seems that its whole schtick is that it scans with its (limited, but far seeing) eyes, builds up a model of the world via an accumulation of glances, re-uses (limited) neurons to slowly imagine a route through that space, and then follows the route to sneak up on other (similarly limited, but less “meta-cognitive”?) spiders which are its prey.)
No matter how fast something can think or react, SOME game could hypothetically be invented that forces a finitely speedy mind to need action space compression and (maybe) even compression of compression choices. Also, the physical world itself appears to contain huge computational depths.
In some sense then, the “idea of an AI getting good *at an RTS*” is an attempt (which might have failed or might be poorly motivated) to point at issues related to cognitive compression and meta-cognition. There is an implied research strategy aimed at learning to use a pragmatically finite mind to productively work on a pragmatically infinite challenge.
The hunch is that maybe object level compression choices should always have the capacity to suggest not just a move IN THE GAME of doing certain things, but also a move IN THE MIND to re-parse the action space, compress it differently, and hope to bring a different (and more appropriate) set of “reflexes” to bear.
The idea of a game with "fog of war" helps support this research vision. Some actions are pointless for the game itself, but essential to ensuring the game is "being understood correctly", and game designers adding fog of war to a video game could be seen as an attempt to represent this possibly universally inevitable cognitive limitation in a concretely ludic, symbolic form.
If an AI is trained by programmers "to learn to play an RTS" but that AI doesn't seem to be learning lessons about meta-cognition or clock/calendar management, then it feels a little bit like the AI is not learning what we hoped it was supposed to learn from "an RTS".
This is why these points made by maximkazhenkov in a neighboring comment are central:
The agents on [the public game] ladder don’t scout much and can’t react accordingly. They don’t tech switch midgame and some of them get utterly confused in ways a human wouldn’t.
I think this is conceptually linked (through the idea of having strategic access to the compression strategy currently employed) to this thing you said:
...you can have a conversation with a StarCraft player while he's playing. It will be clear the player is not paying you his full attention at particularly demanding moments, however… I considered using System 1 and System 2 analogies, but because of certain reservations I have with the dichotomy… [that said] there is some deep strategic thinking being done at the instinctual level. This intelligence is just as real as System 2 intelligence and should not be dismissed as being merely reflexes.
In the story about metacognition, verbal powers seem to come up over and over.
I think a lot of people who think hard about this understand that “mere reflexes” are not mere (especially when deeply linked to a reasoning engine that has theories about reflexes).
Also, I think that human meta-cognitive processes might reveal themselves to some degree in the apparent fact that a verbal summary can be generated by a human *in parallel without disrupting the “reflexes” very much*… then sometimes there is a pause in the verbalization while a player concentrates on <something>, and then the verbalization resumes (possibly with a summary of the ‘strategic meaning’ of the actions that just occurred).
Arguably, to close the loop and make the system more like the general intelligence of a human, part of what should be happening is that any reasoning engine bolted onto the (constrained) reflex engine should be able to be queried by ML programmers to get advice about what kinds of "practice" or "training" need to be attempted next.
The idea is that by *constraining* the “reflex engine” (to be INadequate for directly mastering the game) we might be forced to develop a reasoning engine for understanding the reflex engine and squeezing the most performance out of it in the face of constraints on what is known and how much time there is to correlate and integrate what is known.
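A toy sketch of that arrangement, purely hypothetical and not a claim about AlphaStar's architecture: a fast "reflex engine" acts every frame, while a slow "reasoning engine" is consulted only occasionally and can swap which reflex repertoire is active (the names and rules below are invented).

```python
import random

# Invented repertoires: each maps an observation to a fast, drilled action.
REFLEX_REPERTOIRES = {
    "macro": lambda obs: "build_worker_and_expand",
    "harass": lambda obs: "poke_the_mineral_line",
    "scout": lambda obs: "send_a_unit_somewhere_unseen",
}

def reasoning_engine(obs):
    """Slow, meta-cognitive choice of repertoire, driven by what is NOT known."""
    if obs["seconds_since_enemy_seen"] > 60:
        return "scout"  # stale information: spend actions on understanding
    return random.choice(["macro", "harass"])

def play_game(frames=2000, nudge_every=300):
    obs = {"seconds_since_enemy_seen": 90}
    repertoire = "macro"
    for frame in range(frames):
        if frame % nudge_every == 0:                   # the occasional 'nudge'
            repertoire = reasoning_engine(obs)         # slow path, rarely taken
        action = REFLEX_REPERTOIRES[repertoire](obs)   # fast path, every frame
    return repertoire

print(play_game())
```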
A decent "reflexive reasoning engine" (i.e., a reasoning engine focused on reflex engines) might be able to nudge the reflex engine (every 1-30 seconds or so?) to do things that allow the reflex engine to scout brand new maps or change tech trees or do whatever else "seems meta-cognitively important".
A good reasoning engine might be able to DESIGN new maps that would stress test a specific reflex repertoire that it thinks it is currently bad at.
A *great* reasoning engine might be able to predict in the first 30 seconds of a game that it is facing a “stronger player” (with a more relevant reflex engine for this game) such that it will probably lose the game for lack of “the right pre-computed way of thinking about the game”.
A really FANTASTIC reflexive reasoning engine might even be able to notice a weaker opponent and then play a “teaching game” that shows that opponent a technique (a locally coherent part of the action space that is only sometimes relevant) that the opponent doesn’t understand yet, in a way that might cause the opponent’s own reflexive reasoning engine to understand its own weakness and be correctly motivated to practice a way to fix that weakness.
(Tangent: to recall the Portia spider tangent above: it preys on other spiders with similar limits. One of the fears here is that all this metacognition, when it occurs in nature, is often deployed in service of competition, either with other members of the same species or else to catch prey. Giving these powers to software entities that ALREADY have better thinking hardware than humans in many ways… well… it certainly gives ME pause. Interesting to think about… but scary to imagine being deployed in the midst of WW3.)
It sounds, Mathias, like you understand a lot of the centrality and depth of "trained reflexes" intuitively, from familiarity with BOTH StarCraft and ML, and part of what I'm doing here is probably just restating large areas of agreement in a new way. Hopefully I am also pointing to other things that are relevant and unknown to some readers :-)
If what we really care about is proving that it can do long-term thinking and planning in a game with a large action space and imperfect information, why choose StarCraft? Why not select something like Frozen Synapse, where the only way to win is to fundamentally understand these concepts?
Personally, I did not know that Frozen Synapse existed before I read your comment here. I suspect a lot of people didn’t… and also I suspect that part of using StarCraft was simply for its PR value as a beloved RTS classic with a thriving pro scene and deep emotional engagement by many people.
I’m going to go explore Frozen Synapse now. Thank you for calling my attention to it!