I know several reasonable algorithms for stable sorting in O(n log n) time and O(sqrt(n)) extra space, like Mrrl’s SqrtSort. That’s good enough for all practical purposes, because anyone who wants to sort a billion elements can afford an extra array of 30000 elements. And all known algorithms using less extra space essentially emulate O(sqrt(n)) bits of storage by doing swaps inside the array, which is clearly a hack.
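To make the "emulate bits of storage by doing swaps inside the array" trick concrete, here is a toy sketch (my own illustration, not Mrrl's SqrtSort): a pair of distinct elements stores one bit in its relative order, a swap flips that bit, and a later stable pass can restore the original order.

```python
# Toy demonstration of borrowing storage from the array itself:
# each pair (a[2i], a[2i+1]) of distinct values encodes one bit.
# Convention: bit 1 <=> the pair is out of order.

def encode_bits(a, bits):
    """Encode bits into the ordering of pairs; pairs must hold distinct values."""
    for i, b in enumerate(bits):
        lo, hi = 2 * i, 2 * i + 1
        assert a[lo] != a[hi], "need two distinct values to store a bit"
        if (a[lo] > a[hi]) != bool(b):   # pair order disagrees with bit: swap
            a[lo], a[hi] = a[hi], a[lo]

def decode_bits(a, n):
    """Read n bits back out of the pair ordering."""
    return [int(a[2 * i] > a[2 * i + 1]) for i in range(n)]

a = [1, 2, 3, 4, 5, 6, 7, 8]
encode_bits(a, [1, 0, 1, 1])   # a is now [2, 1, 3, 4, 6, 5, 8, 7]
print(decode_bits(a, 4))       # -> [1, 0, 1, 1]
```

Real low-memory sorts use this idea at scale, which is exactly the hack being complained about: the "scratch space" is made of the keys themselves, so every read and write costs comparisons and swaps.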
Radix sort has its own rabbit hole. If you’re sorting strings that are likely to have common prefixes, comparison sorting isn’t the best way, because it needs to look at the initial characters over and over. There are many faster algorithms for string sorting based on ideas from radix sort: American flag sort, three-way radix quicksort, etc. The Haskell package Data.Discrimination generalizes the idea from strings to arbitrary algebraic datatypes, allowing you to sort them in almost linear time (in terms of total size, not number of elements).
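A compact sketch of one of the algorithms mentioned above, three-way radix quicksort (Bentley–Sedgewick). It partitions on the d-th character only, so a shared prefix is inspected once per recursion level rather than once per pairwise comparison:

```python
# Three-way radix quicksort (Bentley-Sedgewick) for a list of strings,
# sorting strs[lo:hi] by characters from position d onward.

def radix_quicksort(strs, lo=0, hi=None, d=0):
    if hi is None:
        hi = len(strs)
    if hi - lo <= 1:
        return
    def char_at(s):
        return ord(s[d]) if d < len(s) else -1   # -1 marks end of string
    pivot = char_at(strs[lo])
    lt, i, gt = lo, lo + 1, hi - 1               # Dijkstra-style 3-way partition
    while i <= gt:
        c = char_at(strs[i])
        if c < pivot:
            strs[lt], strs[i] = strs[i], strs[lt]
            lt += 1
            i += 1
        elif c > pivot:
            strs[i], strs[gt] = strs[gt], strs[i]
            gt -= 1
        else:
            i += 1
    radix_quicksort(strs, lo, lt, d)             # d-th char smaller than pivot
    if pivot >= 0:                               # equal d-th char: advance to d+1
        radix_quicksort(strs, lt, gt + 1, d + 1)
    radix_quicksort(strs, gt + 1, hi, d)         # d-th char larger than pivot

words = ["she", "sells", "seashells", "by", "the", "sea", "shore"]
radix_quicksort(words)
print(words)   # -> ['by', 'sea', 'seashells', 'sells', 'she', 'shore', 'the']
```

Note it inherits quicksort's traits: in place, but not stable. American flag sort trades the recursion on equal characters for explicit counting passes, and Data.Discrimination pushes the same character-at-a-time discrimination through arbitrary algebraic datatypes.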
It’s an old problem; cousin_it has posted:
Radix. Except that it’s not in place.