Thanks for presenting this algorithm! I had not heard about it, and might try it at some point.
One thing I’m curious about is its interpretation as a sorting algorithm. Because basically what you are doing is sorting your list using pairwise comparisons. It cannot be better than the classical sorting algorithms, because comparison-based sorting takes at least O(n log n). But maybe the benefit is that it’s more intuitive for a human being to carry out than quicksort or mergesort? Or that it requires a weaker sense of transitivity than those? Also, I wonder if this isn’t a known sorting algorithm.
My take is that quicksort, merge sort and all other common sorts involve physically rearranging memory in a way that would be a bad fit for a list on paper. This one doesn’t change the original list, it just emits its elements one at a time in order.
This is worst-case O(n²) - if they’re currently in order (i.e. sorted from highest to lowest priority), you’d do n(n−1)/2 comparisons. Best case O(n) comparisons, if they’re currently in reverse order. (You could interpret it as high priority elements being high in the sort order, but then the algorithm emits elements one at a time in reverse order.) Memory use is O(n) pointers/indexes (the list of marked elements), n bools (a list of which items have already been crossed off) and O(1) bookkeeping.
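To make those counts concrete, here’s the arithmetic for a few list sizes (just the n(n−1)/2 worst-case formula against the roughly-n best case, not the algorithm itself):

```python
def worst_best(n):
    """Worst-case (n(n-1)/2) vs. roughly-n best-case comparison counts."""
    return n * (n - 1) // 2, n

for n in (10, 100, 1000):
    worst, best = worst_best(n)
    print(f"n={n}: ~{worst} comparisons worst case, ~{best} best case")
```

So at n=100 the gap is already ~4950 vs ~100 comparisons, which is why the starting order of the list matters so much here.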
You could see it as a variant on selection sort—in particular it has the same property of “find the least element, then the next least, and so on” that lets you stop part way through if you only want to do three tasks. But because of the “emit items instead of changing the original list” behaviour, instead of just keeping track of the current minimum, we can keep track of a descending subsequence of the original list.
In pseudocode, I think we get something like...
let input: [Task] be the input list (n elements)

initialize:
    emitted: [Bool] = [False, False, ..., False] (n elements)
    descending-stack: [Int] = [-1]    (sentinel at the bottom)
    last-pop: Int = -1

repeat:
    current-min: (Infinity | Task) = Infinity if peek(descending-stack) == -1,
                                     else input[peek(descending-stack)]
    for i from last-pop + 1 to n-1:
        if not emitted[i] and input[i] < current-min:
            push(descending-stack, i)
            current-min = input[i]
    last-pop = pop(descending-stack)
    if last-pop == -1:
        finished()
    emit(input[last-pop])
    emitted[last-pop] = True

(Note the two details that are easy to get wrong: we emit input[last-pop], not the loop variable, and current-min is re-read from the stack top at the start of each pass — otherwise items that fall between the just-emitted minimum and the new stack top get skipped.)
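And here’s the same idea as a runnable Python sketch (the name `emit_in_order` is mine; I’m assuming tasks compare with `<` and that a lower value means “do it sooner”). The current minimum is re-read from the stack top at the start of each pass, which is what keeps e.g. [3, 1, 2] from emitting 3 before 2:

```python
def emit_in_order(tasks):
    """Yield the tasks one at a time in ascending order, without mutating the input."""
    n = len(tasks)
    emitted = [False] * n   # which items have been crossed off
    stack = [-1]            # indices of a descending subsequence; -1 is a sentinel
    last_pop = -1
    while True:
        # The threshold for this pass is the current stack top (Infinity past the sentinel).
        current_min = float("inf") if stack[-1] == -1 else tasks[stack[-1]]
        # Scan everything after the last emitted index, extending the descending run.
        for i in range(last_pop + 1, n):
            if not emitted[i] and tasks[i] < current_min:
                stack.append(i)
                current_min = tasks[i]
        last_pop = stack.pop()
        if last_pop == -1:
            return          # sentinel reached: every task has been emitted
        yield tasks[last_pop]
        emitted[last_pop] = True
```

Because it’s a generator, you can also just take the first few items and stop, matching the “only do three tasks” use case.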
For “what task should I do next,” it’s O(n), because you just go down the list once doing pairwise comparisons. “What task should I do next” seems more important for actually doing things than sorting the entire list at first to avoid deliberation time / indecision / harder 3-way value comparisons.
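That single pass is just a linear minimum scan. A sketch (my own naming; again assuming a lower value means higher priority):

```python
def next_task(tasks):
    """One O(n) pass of pairwise comparisons: return the task to do next, or None."""
    best = None
    for t in tasks:
        if best is None or t < best:
            best = t
    return best
```

In Python this is just `min(tasks, default=None)`, but spelled out it makes the “one comparison per list item” cost explicit.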
Seems like it’s a lazy sort to me (with obvious wrinkles from the fact that the list can grow). It also seems to be a variant of drop sort (which is O(n) via cheating) designed for repeated passes on the remaining list.
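For anyone who hasn’t met drop sort: a single pass keeps an in-order subsequence and “drops” everything else; the variant described here re-runs the pass on what was dropped. A minimal sketch of one pass (names are mine):

```python
def drop_sort_pass(items):
    """One drop-sort pass: keep a non-decreasing subsequence, 'drop' the rest."""
    kept, dropped = [], []
    for x in items:
        if not kept or x >= kept[-1]:
            kept.append(x)
        else:
            dropped.append(x)
    return kept, dropped
```

Plain drop sort “cheats” by discarding `dropped` entirely, which is how it gets O(n); looping over the dropped remainder and merging the kept runs is what turns it back into an honest (strand-sort-like) algorithm.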
Thanks a lot for the thoughtful reply! Not having to move the items around could indeed help to lower the cognitive burden.
Right, for a single pass it’s a find-the-maximum-element algorithm in O(n).
I think if you eventually do every task on the list it’s equivalent to sorting the list? But this basically never happens to me.
Presumably intermediate states (doing e.g. half the items) are of intermediate efficiency? But my grasp of the underlying theory here is pretty weak.
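One way to see the intermediate cost: emitting the first k items by repeated minimum scans is about k·n comparisons, which interpolates between O(n) for k=1 and O(n²) for k=n. A self-contained illustration (a deliberately naive selection-style sketch, not the exact algorithm above):

```python
def first_k(tasks, k):
    """Emit the k smallest tasks via repeated O(n) minimum scans: ~k*n comparisons."""
    remaining = list(tasks)  # work on a copy so the original list is untouched
    out = []
    for _ in range(min(k, len(remaining))):
        m = min(remaining)
        remaining.remove(m)
        out.append(m)
    return out
```

So doing half the items costs roughly half of the full-sort worst case, consistent with the intermediate-efficiency guess.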