I’ve always felt that this is actually a defect in big O notation. If we want to know how long a computation is going to take, it honestly doesn’t help us at all to know that it’s O(1), because, as you point out, the constant could be so mind-bendingly huge that the computation never finishes. An O(n) or even O(2^n) method might well be faster on realistic inputs.
It seems like we should be measuring our computations some other way, perhaps in terms of the actual number of seconds (or, if you want to be cross-platform, the number of steps) involved. So instead of O(1) you’d say “1200 steps”; instead of O(n) you’d say “480n steps”.
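To make that concrete, here’s a quick sketch in Python using the made-up step counts above (1200 and 480n are just illustrative numbers, not measurements of any real algorithm). Counting steps shows exactly where one method overtakes the other, which the big O forms alone can’t tell you:

```python
# Hypothetical step counts from the comment above.
CONSTANT_STEPS = 1200        # assumed fixed cost of the "O(1)" method
LINEAR_STEPS_PER_ITEM = 480  # assumed per-item cost of the "O(n)" method

def constant_cost(n: int) -> int:
    """Steps for the O(1) method -- independent of input size."""
    return CONSTANT_STEPS

def linear_cost(n: int) -> int:
    """Steps for the O(n) method -- grows with input size."""
    return LINEAR_STEPS_PER_ITEM * n

for n in (1, 2, 3, 10, 100):
    faster = "O(n)" if linear_cost(n) < constant_cost(n) else "O(1)"
    print(f"n={n:3d}: O(1)={constant_cost(n):6d} steps, "
          f"O(n)={linear_cost(n):6d} steps -> {faster} wins")
```

With these particular numbers the O(n) method is cheaper for n ≤ 2 and the O(1) method wins after that; with a huge enough constant, the crossover point might never be reached for any input you’d see in practice.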