It’s less surprising if you’re familiar with the history of MCTS. MCTS is a generic MDP or decision-tree solver: you can use it for pretty much any non-adversarial, discrete, fully-observed planning problem where you have a model; it extends fairly easily to partially-observed POMDPs and continuous observations, and that was done back in the 2000s. (Adversarial settings are also easy: minimax it. But adversarial+POMDP mostly breaks MCTS, which is why you see poker solved by other methods rather than MCTS.) Path planning is a classic tree-search problem which comes up all the time in robotics and in other planning domains like plotting movement paths in simulations/games, so if you go back and look, you’ll find plenty of pre-AlphaGo applications of MCTS to path planning.
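To make the "generic MDP solver" point concrete, here is a minimal UCT-style MCTS sketch applied to path planning on a toy grid. Everything here (the 5×5 grid, the reward scheme, the rollout depth, the exploration constant) is an illustrative assumption, not something from the comment above; it just shows the four standard phases (selection, expansion, simulation, backpropagation) driving movement toward a goal in a deterministic, fully-observed MDP.

```python
# Toy MCTS/UCT path planner on a 5x5 grid -- illustrative sketch only.
# Grid, rewards, and rollout policy are assumptions for demonstration.
import math
import random

GRID = 5                                       # 5x5 grid
GOAL = (GRID - 1, GRID - 1)                    # goal in the far corner
ACTIONS = [(0, 1), (1, 0), (0, -1), (-1, 0)]   # E, S, W, N

def step(state, action):
    """Deterministic transition: move if in bounds, else stay put."""
    x, y = state[0] + action[0], state[1] + action[1]
    return (x, y) if 0 <= x < GRID and 0 <= y < GRID else state

def rollout(state, depth=25):
    """Random-policy simulation; reward 1 for the goal, minus a step cost."""
    for t in range(depth):
        if state == GOAL:
            return 1.0 - 0.01 * t
        state = step(state, random.choice(ACTIONS))
    return 0.0

class Node:
    def __init__(self, state):
        self.state = state
        self.children = {}   # action -> Node
        self.visits = 0
        self.value = 0.0

def uct_select(node, c=1.4):
    """Pick the child maximizing the UCB1 score (exploit + explore)."""
    return max(node.children.values(),
               key=lambda ch: ch.value / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def search(root_state, n_iter=2000):
    root = Node(root_state)
    for _ in range(n_iter):
        node, path = root, [root]
        # 1. Selection: descend while the node is fully expanded
        while len(node.children) == len(ACTIONS) and node.state != GOAL:
            node = uct_select(node)
            path.append(node)
        # 2. Expansion: add one untried action
        if node.state != GOAL:
            a = random.choice([a for a in ACTIONS if a not in node.children])
            node.children[a] = Node(step(node.state, a))
            node = node.children[a]
            path.append(node)
        # 3. Simulation: cheap random rollout from the new leaf
        reward = rollout(node.state)
        # 4. Backpropagation: update statistics along the path
        for n in path:
            n.visits += 1
            n.value += reward
    # Return the most-visited action at the root
    return max(root.children, key=lambda a: root.children[a].visits)

random.seed(0)
print(search((0, 0)))
```

Swap in a noisy `step` and the same loop handles a stochastic MDP; replacing the exact state with a belief or observation history is the standard route to the POMDP extensions mentioned above.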