I have had a similar mindset in games for a long time. The exploration you describe has, in my experience, felt like the fastest way to improve at games (and in maths: when a teacher says some method would not work on a problem, without telling me why, I am likely to try it and see for myself how the edge cases invalidate it). Apart from building broader intuition about the subject, I believe it enriches the “toolkit”.
There is a 1v4 computer game, Dead by Daylight. What attracts me is the ability to make your own build: the set of powers your character enters the match with. There are more than 10^12 combinations, and that creates an optimisation problem. The game's community has converged on roughly a dozen “best” builds, and a new player can just take one of them into a match without understanding why it is considered best. Instead, I decided to explore and play with random builds. This opened up a vast space of rare power synergies the community never bothers to discuss; unconventional builds and playstyles get thrown straight into the bin.
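For a rough sense of where a number like 10^12 can come from, here is a back-of-the-envelope sketch; the power, item, add-on, and offering counts below are my own assumptions for illustration, not exact game data:

```python
from math import comb

# Rough size of the build space. All counts are illustrative assumptions.
power_combos = comb(100, 4)        # choose 4 powers out of ~100
item_combos = 30 * comb(20, 2)     # ~30 items, each taking 2 add-ons from ~20
offerings = 40                     # ~40 offerings

total = power_combos * item_combos * offerings
print(f"{total:.1e}")              # ~8.9e+11, i.e. on the order of 10^12
```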
If I had friends who play computer games, I would really like to run an experiment. I would ask them to start learning the game, completing the same number of matches per weekend. One group would get the classical introduction: materials and videos explaining the current meta (the “best” builds and ways of winning, as the community defines them), and they would be limited to the professionals’ recommendations.
The other group would play only with random builds and would get no classical “how to win” materials; instead, they would watch videos where gamers explore the space of possible builds, try to win by unconventional methods, experiment with weird winning criteria, play with handicaps, or do other things the community majority considers “inefficient”.
My intuition says that the second group would improve much faster and, after 50 matches, be at a higher level (even though they might have lost more matches during training). They would find the powers that fit them best, know more of the game's mechanics to apply in edge cases, and might even play on a psychological level, surprising opponents with some weird strategy.
One of the reasons I never started that little experiment is the absence of a visible Elo in the game. The only way to test which group is better is face-to-face matches between them, but that might not be a correct evaluation of skill. It's the rock-paper-scissors problem: when one group is taught to throw only rock and the other is taught to adapt, the outcome is trivial. A true test would need many matches against random opponents to reach statistically significant conclusions, and I have not yet found people with the dedication for that. So I am left with a strong belief that I have not found a way to test.
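To make “many matches” concrete, here is a minimal sketch of the sample-size arithmetic, assuming the groups are compared on raw win rate with a two-proportion z-test; the 50% vs 55% gap, alpha, and power are assumed values for illustration:

```python
from statistics import NormalDist

def matches_per_group(p1, p2, alpha=0.05, power=0.8):
    """Approximate matches per group needed to detect a win-rate gap
    p1 vs p2 with a two-sided two-proportion z-test (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2

# Telling a 50% win rate apart from 55% takes ~1,560 matches per group.
print(round(matches_per_group(0.50, 0.55)))
```

Under those assumptions, at 50 matches per player each group would need on the order of 30 players, which is roughly why the experiment stayed hypothetical.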