I don’t really know SC2, but I played Civ4, so by ‘scouting’ did you mean fogbusting? And the cost is spending a unit to do it? Is fogbusting even possible in a real-life board game?
Yes. There has to be some cost associated with it, so that deciding whether, when, and where to scout becomes an essential part of the game. The most advanced game-playing AIs to date, AlphaStar and OpenAI Five, have both demonstrated tremendous weakness in this respect.
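The scouting decision can be framed as a simple expected-value calculation. A minimal sketch, with all probabilities and payoffs invented purely for illustration:

```python
# Toy expected-value model of a scouting decision.
# All probabilities and payoffs are made up for illustration.

def scouting_value(p_surprise, loss_if_surprised, loss_if_scouted, scout_cost):
    """Expected loss without scouting vs. with scouting."""
    # Without scouting: if the opponent's plan is a surprise, we pay the full loss.
    ev_blind = p_surprise * loss_if_surprised
    # With scouting: we pay the scout's cost, but a surprise hurts far less
    # because we saw it coming and could react.
    ev_scout = scout_cost + p_surprise * loss_if_scouted
    return ev_blind, ev_scout

# Early game: surprises are likely and devastating -> scouting pays off.
blind, scout = scouting_value(p_surprise=0.5, loss_if_surprised=100,
                              loss_if_scouted=20, scout_cost=10)
assert scout < blind  # expected loss 20 vs 50: spend the unit

# Late game: little left to learn -> keep the unit instead.
blind, scout = scouting_value(p_surprise=0.05, loss_if_surprised=100,
                              loss_if_scouted=20, scout_cost=10)
assert scout > blind  # expected loss 11 vs 5: don't scout
```

Because the right answer flips as the parameters drift over the course of a game, there is no dominant strategy; that trade-off is what makes scouting an essential decision rather than a rote action.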
What does it have to do with Markov property?
The Markov property refers to the idea that the future depends only on the current state, so the history can be safely ignored. This is true for e.g. chess or Go; AlphaGo Zero could play a game of Go starting from any board configuration without knowing how it got there. It’s not easily applicable to StarCraft because of the fog of war: what you scouted inside your opponent’s base a minute ago may not be visible right now, yet it still provides valuable information about what the right action is. Storing the entire history as part of the “game state” would add huge complexity, since a single game’s history amounts to tens of thousands of static game states.
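The failure of the Markov property under fog of war can be made concrete with a toy example (the unit names and counters below are invented for illustration, not taken from any real StarCraft bot): two games can present the identical current observation while their histories demand different responses.

```python
# Toy illustration: same current observation, different optimal actions,
# because the relevant information lives in the history, not the current state.
# Unit names and counters are invented for illustration.

# The current observation is identical in both games: the opponent's base
# is fogged and none of their units are visible right now.
observation_now = "opponent base fogged, no enemy units visible"

# What a scout saw a minute ago differs between the two games.
history_a = ["scouted: opponent building air units"]
history_b = ["scouted: opponent massing ground units"]

def best_response(observation, history):
    """A policy that (correctly) uses stale scouting info from the history."""
    if any("air" in event for event in history):
        return "build anti-air"
    if any("ground" in event for event in history):
        return "build defensive ground army"
    return "scout again"

# A Markov policy sees only `observation_now`, which is identical in both
# games, so it is forced to choose the same action in both. A history-aware
# policy is not:
assert best_response(observation_now, history_a) == "build anti-air"
assert best_response(observation_now, history_b) == "build defensive ground army"
```

Formally this is the difference between an MDP, where the current state is a sufficient statistic, and a POMDP, where the agent must carry a memory (or belief) distilled from the observation history.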
Is fogbusting even possible in a real-life board game?
Yes, hidden identity chess for instance.
RL stands for reinforcement learning. Basically all recent advances in game-playing AI have come from this field, and it’s the reason it’s so hard to come up with a board game that would be hard for an AI to solve (you could always reconfigure the Turing test or some other AGI-complete task into a “board game”, but that’s cheating). I’d even guess it’s impossible to design such a board game, because there is just too much brute-force compute available now.
Ah, I see. All of your explanations point to one thing: imperfect information. Fogged Markov tiles or coin-like tokens are ways to confuse the AI and force it to ramp up its brute-force search exponentially, without much effort from the puzzler and/or much effect on the game. And since the AI doesn’t know the hidden info, it can’t accurately calculate the value of that info; that’s why AI sucks at scouting.
Coincidentally, I already invented a board game that incorporates imperfect info to beat AI back in 2016. I guess I’d need to put some more work into it.
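The “exponential ramp-up” from hidden pieces is easy to quantify: with imperfect information, a solver can no longer reason about single visible positions; it must consider every possible assignment of identities to the hidden pieces. A toy count, with arbitrary numbers chosen only to show the multiplicative structure:

```python
# Toy count of how hidden information multiplies the search space.
# The numbers are arbitrary; only the multiplicative structure matters.

visible_states = 10_000  # distinct board positions the player can see

def search_space(hidden_tokens, identities_per_token=4):
    # For every visible position, the solver must consider every joint
    # assignment of identities to the hidden tokens.
    return visible_states * identities_per_token ** hidden_tokens

assert search_space(0) == 10_000           # perfect information: no blow-up
assert search_space(5) == 10_240_000       # 4**5 = 1024 hidden configurations
assert search_space(10) == 10_485_760_000  # each extra token multiplies by 4
```

So each hidden token the designer adds is cheap for the human players but multiplies the machine’s effective state space by a constant factor, which is exactly the asymmetry described above.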