One of the computer science tricks I know might be helpful here.
A depth-first search takes less memory than a breadth-first search, since you only have to remember your current path through the tree, rather than your progress through entire layers, which can get quite large.
The problem with depth-first is that you have to hit the bottom before you can go sideways at all. It doesn’t work on infinite trees, or on graphs with cycles: you keep going deeper, and even if what you were looking for was only a few steps from the root, you may never reach it. Breadth-first search, by contrast, finds that low-hanging fruit first.
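To make the memory contrast concrete, here is a minimal sketch of a breadth-first search in Python over a toy tree stored as a dict of children. The names `tree`, `root`, and `goal` are placeholders for illustration, not anything from a particular library; the point is that the queue has to hold a path for every node on the current frontier, which is the part that can get quite large.

```python
from collections import deque

def breadth_first_search(tree, root, goal):
    """Search layer by layer; shallow goals are found first, but the queue
    must hold one path per node on the current frontier."""
    queue = deque([[root]])              # frontier: one full path per entry
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for child in tree.get(node, []):
            queue.append(path + [child])
    return None

if __name__ == "__main__":
    # Toy tree: a dict mapping each node to its children.
    tree = {"root": ["a", "b"], "a": ["c", "d"], "b": ["e"], "e": ["f"]}
    print(breadth_first_search(tree, "root", "f"))  # ['root', 'b', 'e', 'f']
```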
But there is a best-of-both-worlds approach: iterative deepening depth-first search. You create an artificial “bottom” by limiting how many steps deep you go.
Start with three, say. Once you hit your limit, you try a sibling instead of going deeper, or back up a layer if there are no siblings. This will quickly find any fruit hanging within three steps of the root, just as a breadth-first search would. But it will fail to find anything deeper than your limit. If you want to search more, you increase your limit: maybe by three if your branching factor is low, or only by one if the branching factor is high or you’re already deep. Then you start over. You’ll search the top of the tree again (kind of a waste), but that’s the smallest part, so it’s fast compared to searching the bottom at your new depth. This tradeoff is often worth it for the lower memory requirements. For a human, repeating the parts near the root will probably tend to shift them into your long-term memory, making those passes even faster, and that’s the part worth getting familiar with anyway.
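And here is one way the whole iterative-deepening idea might look in code, again just a sketch over the same dict-of-children style of tree. The function names, the starting limit of three, and the cap of thirty are illustrative assumptions, not a canonical implementation. The depth-limited pass only ever keeps the current path in memory, and the outer loop simply restarts it with a deeper limit each time.

```python
def depth_limited_search(tree, node, goal, limit, path):
    """Depth-first search that refuses to go more than `limit` steps deeper."""
    if node == goal:
        return path + [node]
    if limit == 0:
        return None  # hit the artificial "bottom"; back up and try a sibling
    for child in tree.get(node, []):
        found = depth_limited_search(tree, child, goal, limit - 1, path + [node])
        if found:
            return found
    return None

def iterative_deepening_search(tree, root, goal, step=3, max_depth=30):
    """Re-run the depth-limited search with an ever-deeper limit."""
    for limit in range(step, max_depth + 1, step):
        found = depth_limited_search(tree, root, goal, limit, [])
        if found:
            return found  # shallow goals are found on the early, cheap passes
    return None

if __name__ == "__main__":
    tree = {"root": ["a", "b"], "a": ["c", "d"], "b": ["e"], "e": ["f"]}
    print(iterative_deepening_search(tree, "root", "f"))  # ['root', 'b', 'e', 'f']
```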