Glad you liked it! The task is hardest to solve with heuristics when the shape partially occludes itself due to the viewing angle (e.g. below), so you don’t always have complete information about the shape. So to my mind a first pass would be to intentionally obscure a section of the view of each block, though even then it’s not really immune to the issue.
Ultimately, heuristic-forming is what turns deliberate System 2 thinking into automatic System 1 thinking, but we don’t have direct control over that process: all that matters to it is that the heuristic’s output matches predicted reward. And so long as mental rotation would reliably solve the problem, there is almost always a set of heuristics that solves the same problem faster. The question is whether the learned heuristics generalise beyond the game’s training set.
That’s an interesting thought. It suggests a rule:
Any form of mental exercise will eventually be replaced by a narrow heuristic.