Since I have a computer science background, this reminds me of something called ‘bucket sort’. Instead of doing an awful lot of calculations to get things into exact places, you split them up based on some relevant but extremely easy to compute criterion. For instance, when sorting strings (series of letters and other symbols) alphabetically, you might group several letters together: A, B, and C might be one bucket. You then use some other procedure to sort within each bucket, and then just read out the contents of the buckets in order to get the end result. This makes sense not because A, B, and C belong together, but because sorting cost scales worse than the number of items.
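Here is a minimal sketch of that A/B/C idea in Python (the function and its names are my own illustration, not a standard routine, and it assumes every string starts with an ordinary letter):

```python
def bucket_sort_strings(strings, letters_per_bucket=3):
    # Group strings by their first letter, sort each small group, concatenate.
    buckets = {}
    for s in strings:
        # Cheap criterion: which block of letters (A-C, D-F, ...) does it start with?
        key = (ord(s[0].upper()) - ord('A')) // letters_per_bucket
        buckets.setdefault(key, []).append(s)
    result = []
    for key in sorted(buckets):              # read the buckets out in order
        result.extend(sorted(buckets[key]))  # any other sort will do inside a bucket
    return result

print(bucket_sort_strings(["cat", "ant", "fox", "bee", "dog", "eel"]))
# ['ant', 'bee', 'cat', 'dog', 'eel', 'fox']
```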
As is usually the case in CS, assume n is the size of the input, and let it be eight times as large as whatever we’ll claim causes one unit of work in all of the following algorithms (this is a common simplifying assumption that approximates the way CS approaches things without requiring much explanation). Best (vaguely reasonable) case for how much this helps: if the sorting algorithm you would otherwise use is n^2, and the items are spread evenly across eight buckets, each bucket holds an eighth as many items, which means each bucket is a sixty-fourth the amount of calculation, for a total of 8/64ths of the original work. Add whatever it cost you to do the initial round of bucketing (which could very well be 1/64th), and you might have 9/64ths total work, or about 14%. This is not the best possible result, but the schemes that do better are specific, special-purpose things. Most people in CS incorrectly claim the best possible result is n log n (note this is base 2 log in computer science, and that it is only the limit for sorts that work by comparing items), which in this case is 24/64ths of an n^2 algorithm.
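To make the arithmetic concrete, here is the same back-of-the-envelope calculation in Python, under the same assumptions (n is 8 units, the plain n^2 sort is 64 units, eight even buckets, and the initial bucketing round is generously priced at one unit):

```python
n = 8
full_cost = n ** 2                     # 64 units for the plain n^2 sort
buckets = 8
per_bucket_cost = (n // buckets) ** 2  # (n/8)^2 = 1 unit per bucket
within_buckets = buckets * per_bucket_cost   # 8 units in total
initial_round = 1                      # assumed cost of splitting into buckets
print((within_buckets + initial_round) / full_cost)  # 0.140625, i.e. 9/64
print(n * 3 / full_cost)               # n log2(n) = 8 * 3 = 24 units, i.e. 24/64 = 0.375
```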
Can we do better than 9/64ths? Absolutely: we can use an algorithm that scales linearly in n, and thus costs 8/64ths. The way we do this is a special variant of ‘bucket sort’ where each bucket holds exactly one thing (a simplification), but since we don’t know in advance which values will show up, we need m buckets where m is the number of possibilities, or (assuming strings over just the alphabet) m = 26^maximumStringLength, which is ridiculous even for computers (a ten character maximum would be 141,167,095,653,376 possibilities, or over five hundred terabytes of extra memory), so we don’t often use such schemes. Even if you were sorting a trillion things, it just doesn’t make sense. Even just for 32-bit integers, it might be 16 gigabytes. Hence, bucket sort is a thing. If, on the other hand, you know everything will be between one and 100, 400 extra bytes is nothing.
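For the friendly end of that spectrum, here is a sketch of the one-bucket-per-possible-value scheme (counting sort) for values known to be between 1 and 100; the 100 counters are roughly the 400 extra bytes mentioned above if each counter is a 4-byte integer (the function name is mine):

```python
def counting_sort_1_to_100(values):
    counts = [0] * 100                  # one bucket per possible value
    for v in values:
        counts[v - 1] += 1              # one linear pass fills the buckets
    result = []
    for value, count in enumerate(counts, start=1):
        result.extend([value] * count)  # read the buckets out in order
    return result

print(counting_sort_1_to_100([42, 7, 100, 7, 1]))  # [1, 7, 7, 42, 100]
```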
If too many things end up in a bucket, it doesn’t save you much calculation. For instance, two buckets of size n/2 would be 2(4^2)/64ths + 1/64th = 33/64ths, which is much worse than n log n. (The n log n algorithms could be argued to be iterated bucket sort. The most famous is quicksort, which just asks ‘higher or lower than the pivot?’ of roughly n items per pass, is intended to produce a series of half-n buckets, and actually only fully sorts a single thing, the pivot, per pass.)
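A sketch of quicksort in that light (the simple list-building version rather than the usual in-place one): each pass splits the remaining items into a lower and a higher bucket around a pivot, and only the pivot itself lands in its final position that pass.

```python
def quicksort(items):
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    lower = [x for x in rest if x <= pivot]   # the "lower" bucket
    higher = [x for x in rest if x > pivot]   # the "higher" bucket
    # Only the pivot is now in its final position; recurse on the two buckets.
    return quicksort(lower) + [pivot] + quicksort(higher)

print(quicksort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```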
Unlike a well written program (well, most of them), humans just simplify the calculation and get it wrong if there are too many calculations or they don’t have the memory space (and humans have incredibly tiny working memory). Bundling into buckets is both completely necessary and possible to do completely wrong, even if you ignore the connections between things.