How can list sorting be O(n)? There are n! possible orderings of a list, so a sorting algorithm needs at least log(n!) comparisons to distinguish between them, which means it’s impossible to have a list sorting algorithm faster than O(log(n!)) = O(n log n).
He probably means bucket sort, or radix sort, which is basically the same concept. You might call that cheating, but if the range of values you sort is small, then you can think of it as linear.
Yes, technically it solves only a subproblem of the general problem of sorting: the case where the number of possible values is bounded.
Morpheus is incorrect; I am actually referring to a generalization of ‘counting sort’ that I independently reinvented, in which you are no longer limited to small positive numbers. I had never heard of sub-n-log-n sorts at that time either. It’s actually quite simple. In fact, it’s how humans prefer to sort things: look at the object, and if you just know where it goes, put it there. The only downside is that there has to be a known place to put it. The O(n log n) lower bound applies only to comparison sorts.
On a computer, we can just make a place to put it. If each value is unique, or we’re sorting numbers, or equal values need no further sorting among themselves, we can implement this in constant time per object being sorted. Space is also linear: O(m + n), where n is the number of objects being sorted and m is the range of values they can hold. This only works well if m is reasonable in size, not too much larger than n. For instance, you might be sorting 1,000,000,000 integers between 0 and 100, requiring only a hundred and one extra spots in memory. (This is conceptually related to the bucket sort mentioned by Morpheus, but the two are distinct, and this one is truly linear.)
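To make that concrete, here is a minimal sketch in C of the generalized counting sort, assuming the range [lo, hi] is known up front (the function names and the offset-by-lo indexing are my choices, not from the original post):

```c
#include <stdio.h>
#include <stdlib.h>

/* Generalized counting sort: values may be negative, as long as the
 * range [lo, hi] is known. O(n + m) time and O(m) extra space, where
 * m = hi - lo + 1. */
void counting_sort(int *a, size_t n, int lo, int hi)
{
    size_t m = (size_t)((long long)hi - (long long)lo + 1);
    size_t *count = calloc(m, sizeof *count);
    if (!count) { perror("calloc"); exit(EXIT_FAILURE); }

    /* Tally pass: each element lands by direct indexing, with no
     * comparisons between elements. */
    for (size_t i = 0; i < n; i++)
        count[(size_t)((long long)a[i] - lo)]++;

    /* Output pass: walk the range once, writing values back in order. */
    size_t k = 0;
    for (size_t v = 0; v < m; v++)
        for (size_t c = count[v]; c > 0; c--)
            a[k++] = (int)((long long)v + lo);

    free(count);
}

int main(void)
{
    int a[] = { 42, -7, 0, 42, 13, -7, 100 };
    size_t n = sizeof a / sizeof a[0];

    counting_sort(a, n, -7, 100);   /* range known in advance */
    for (size_t i = 0; i < n; i++)
        printf("%d ", a[i]);        /* prints: -7 -7 0 13 42 42 100 */
    putchar('\n');
    return 0;
}
```

Note that the tally pass is straight-line code per element, which is where the near-absence of branching mentioned below comes from.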
The worst case is usually that you are sorting values that could be any 32-bit integer, in which case you would need about 16GB of memory for the count array alone (2^32 counters at 4 bytes each). This is normally unjustifiable, but if you were sorting a trillion numbers (and had a large amount of RAM), it’s nothing. (And the fact that it is linear would be quite a godsend.) Another positive is that this sort has virtually no branching, and is thus much cheaper on a per-instruction basis too. (Branching kills performance.)
This can also be integrated as a subroutine (one that requires very little memory) into an overall sorting routine similar to (but better than) quicksort; used that way it is more similar to bucket sort, but kind of reversed.
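The post doesn’t spell this hybrid out, so what follows is only a speculative sketch of one way to read it: recurse quicksort-style, but pivot on the midpoint of the partition’s value range rather than on a sampled element (the “reversed” bucket flavor), and finish any partition whose range is cheap to enumerate with the counting pass from the sketch above. The 2*n threshold and the midpoint pivot are my assumptions, not the author’s stated design.

```c
#include <stddef.h>

void counting_sort(int *a, size_t n, int lo, int hi);  /* from the sketch above */

/* Speculative hybrid (my reading, not the author's stated design). */
void hybrid_sort(int *a, size_t n)
{
    if (n < 2) return;

    /* Linear scan for the actual range of this partition. */
    int lo = a[0], hi = a[0];
    for (size_t i = 1; i < n; i++) {
        if (a[i] < lo) lo = a[i];
        if (a[i] > hi) hi = a[i];
    }
    size_t m = (size_t)((long long)hi - lo + 1);

    if (m <= 2 * n) {                /* assumed threshold: range ~ length */
        counting_sort(a, n, lo, hi);
        return;
    }

    /* Split around the midpoint of the value range. Both sides are
     * guaranteed non-empty because lo and hi are actual array values
     * and lo < pivot <= hi, so the recursion always makes progress. */
    int pivot = (int)((long long)lo + (long long)(m / 2));
    size_t i = 0, j = n;
    while (i < j) {
        if (a[i] < pivot) i++;
        else { j--; int t = a[i]; a[i] = a[j]; a[j] = t; }
    }
    hybrid_sort(a, i);
    hybrid_sort(a + i, n - i);
}
```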
Interesting!
I feel I should mention an important implementation detail. If you do not know the range of possible values, it is often a good idea to check, since most generative processes only use a subset of the full type’s range. The check is also a linear-time process that keeps only two values in memory, though it does involve some branching. It is better to just know the range and pass it in, but the ability to discover it makes this a much more practical algorithm.
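For completeness, a sketch of what that range-discovery pass might look like (again, the name is mine):

```c
#include <assert.h>
#include <stddef.h>

/* One linear pass to find the actual value range before sorting.
 * Keeps only two values (lo and hi) in memory; the two comparisons
 * per element are the branching mentioned above. */
void find_range(const int *a, size_t n, int *lo, int *hi)
{
    assert(n > 0);
    *lo = *hi = a[0];
    for (size_t i = 1; i < n; i++) {
        if (a[i] < *lo) *lo = a[i];
        if (a[i] > *hi) *hi = a[i];
    }
}
```

Chained with the counting sort above (find_range first, then counting_sort over [lo, hi]), the whole thing stays linear: two passes over the data plus one over the, hopefully small, range.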