Ultimately, heuristic formation is what turns deliberate System 2 thinking into automatic System 1 thinking, but we don’t have direct control over that process. So long as the heuristic matches the predicted reward, that is all the process cares about. And so long as mental rotation would reliably solve the problem, there is almost always a set of heuristics that solves the same problem faster. The question is whether the learned heuristics generalise outside the game’s training set.
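A toy sketch of that substitution, with all names invented for illustration: a general-purpose solver (the System 2 analogue) checks whether any rotation of a piece matches a slot, while a "learned heuristic" is just a cache of answers seen during practice (the System 1 analogue) — fast on familiar inputs, but with no guarantee outside the inputs it was trained on.

```python
def rotate_90(grid):
    """General-purpose rotation: works on any grid (System 2 analogue)."""
    return tuple(zip(*grid[::-1]))

def fits(piece, slot):
    """Deliberate check: does some rotation of `piece` match `slot` exactly?"""
    p = tuple(map(tuple, piece))
    target = tuple(map(tuple, slot))
    for _ in range(4):
        if p == target:
            return True
        p = rotate_90(p)
    return False

# A "trained" heuristic: cached answers for inputs seen during practice.
# Instant on familiar inputs, but only as general as its training set
# (System 1 analogue).
heuristic_cache = {}

def fits_heuristic(piece, slot):
    key = (tuple(map(tuple, piece)), tuple(map(tuple, slot)))
    if key not in heuristic_cache:
        # Unfamiliar input: fall back to slow, deliberate reasoning.
        heuristic_cache[key] = fits(piece, slot)
    return heuristic_cache[key]
```

The cache here generalises trivially because it falls back to the full solver; a heuristic distilled purely from rewarded examples has no such fallback, which is exactly where the generalisation question bites.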
That’s an interesting thought. It suggests a rule:
Any form of mental exercise will eventually be replaced by a narrow heuristic.