What if the lock has multiple combinations that are close to each other? In that case brute force can do much worse than a random search, because the correct combinations could be 98, 99, 100, clustered right at the end of the range that a sequential search reaches last. The extra valid combinations barely help the brute force, while a random search's expected time drops roughly in proportion to how many combinations work. This is true for real combination locks, by the way: they usually aren't too sensitive to being off by a single number, so multiple combinations do open them.
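A rough simulation makes the asymmetry concrete. This is just a sketch, assuming a dial from 0 to 100 where the clustered values 98, 99, and 100 all open the lock:

```python
import random

# Hypothetical setup: a 0-100 dial, and any of the clustered
# combinations 98, 99, 100 opens the lock.
VALID = {98, 99, 100}
DIAL = range(101)

def sequential_tries():
    """Brute force from 0 upward; count guesses until the lock opens."""
    for tries, guess in enumerate(DIAL, start=1):
        if guess in VALID:
            return tries

def random_tries():
    """Guess uniformly at random (with replacement) until the lock opens."""
    tries = 0
    while True:
        tries += 1
        if random.choice(DIAL) in VALID:
            return tries

trials = 10_000
print("sequential:", sequential_tries())  # always 99 guesses
print("random avg:", sum(random_tries() for _ in range(trials)) / trials)  # about 34
```

The sequential scan pays for the cluster sitting at the end of its fixed order; the random guesser only cares that three values work, not where they are.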
Or take an even simpler case: you need to create a single algorithm to break a large number of locks, and all the locks have the same combination. You can't carry memory between locks or modify the algorithm once it's finished, so you have to pick the best one up front.
If you pick a deterministic algorithm, there is a significant chance the combination is something like 99999, the very last value it tries, and then every single lock costs the worst-case time. With the random algorithm the time per lock is spread around the average case, and the total over many locks concentrates tightly around that average (roughly Gaussian, by the central limit theorem).
The expected completion time is the same for both when you average over where the combination could be, but the deterministic algorithm has a significantly higher chance of getting pinned to its worst-case scenario.
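Here is a minimal sketch of the many-locks scenario, assuming 100,000 possible combinations, 100 locks, and an adversarially placed combination of 99999 (all of those numbers are illustrative):

```python
import random

N = 100_000        # possible combinations 0 .. 99_999
LOCKS = 100        # identical locks to open
COMBO = 99_999     # adversarial case: the very last value the fixed scan tries

def deterministic_total():
    """Scan 0..N-1 in the same order on every lock: worst case on every lock."""
    per_lock = COMBO + 1            # position of the combo in the fixed order
    return per_lock * LOCKS

def randomized_total():
    """Use a fresh random order on each lock; per-lock cost averages about N/2."""
    total = 0
    for _ in range(LOCKS):
        order = list(range(N))
        random.shuffle(order)
        total += order.index(COMBO) + 1
    return total

print("deterministic:", deterministic_total())   # 10,000,000 guesses every time
print("randomized:  ", randomized_total())       # about 5,000,000, run to run
```

The randomized total varies a little from run to run, but it stays close to half the deterministic figure; the deterministic scan pays the full worst case on every lock because the adversary only had to beat one fixed ordering.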
Now a plain random search is bad because it can forget what it has already tried, but you can fix that by adding memory, or by some other way of producing a random ordering of the candidates so that nothing gets retried.
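One simple way to do that, sketched below, is to shuffle the candidate list once and walk it in that order (`opens_lock` here is a hypothetical oracle that reports whether a guess works):

```python
import random

def random_order_search(candidates, opens_lock):
    """Random search with memory: shuffle once, then try each candidate exactly once."""
    order = list(candidates)
    random.shuffle(order)          # a random permutation, so no guess is ever repeated
    for tries, guess in enumerate(order, start=1):
        if opens_lock(guess):
            return guess, tries
    return None, len(order)

# e.g. random_order_search(range(100_000), lambda g: g == 99_999)
```

This keeps the worst case identical to brute force (every candidate tried once) while removing any fixed order an adversary could exploit.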
And in general, for any deterministic algorithm there exists an input pattern that disrupts it and gives it much worse performance. These aren't rare pathological cases either.