It sounds to me like Peter is suggesting that we randomly permute all inputs to the sorting algorithm, in hopes of improving the average-case runtime. The justification appears to be that the worst-case input has low complexity because it’s easy to describe: you can describe it just by saying “it’s the worst case for this algorithm.”
For example, the worst-case input for a naive Quicksort (one that always picks the first element as the pivot) is an already-sorted list. You could, indeed, get better average performance out of Quicksort if you could randomly permute its inputs in zero time, because real-world code is more likely to produce already-sorted lists than lists sorted by, say, x mod 17.
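To make that concrete, here is a minimal Python sketch (my own illustration, not code from the discussion): a first-element-pivot Quicksort degrades to quadratic work on an already-sorted list, while a free random shuffle of the input restores the expected O(n log n) behaviour.

```python
import random
import sys

def quicksort(xs):
    # Naive Quicksort with the first element as pivot. On an
    # already-sorted list every partition is maximally unbalanced,
    # so the recursion goes n levels deep and the work is quadratic.
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left) + [pivot] + quicksort(right)

def shuffled_quicksort(xs):
    # Randomly permute the input first (pretend the shuffle is free):
    # a worst case still exists, but no fixed input is reliably bad.
    xs = list(xs)
    random.shuffle(xs)
    return quicksort(xs)

if __name__ == "__main__":
    sys.setrecursionlimit(10000)
    data = list(range(2000))  # already sorted: worst case for quicksort() above
    # quicksort(data) would recurse ~2000 frames deep and do ~n^2/2 comparisons;
    # shuffled_quicksort(data) does expected O(n log n) work.
    assert shuffled_quicksort(data) == data
```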
My initial reaction was that the implicit description “that input which is worst case for this algorithm” might specify an input which was not unusually likely to occur in practice. Peter is saying that it is more likely to occur in practice, even if it’s an implicit description—even if we don’t know what the worst-case input is—because it has lower complexity in an absolute sense, and so more processes in the real world will generate that input than one with a higher complexity.
Is that right?
And no fast deterministic permutation can be used instead, because the deterministic permutation could itself be specified concisely: “permute, then sort” is still a concisely described algorithm, so its worst-case input is just as compressible as before.
In practice, our pseudo-random number generators use a seed of 16 to 32 bits, so they introduce almost no complexity at all by this criterion. No PRNG can be useful for this purpose, for the same reason no deterministic permutation can be useful.
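A quick sketch of that last point, again only my own illustration (reusing the hypothetical quicksort() above): seeding a PRNG with at most 32 bits adds at most about 32 bits to the description of the combined procedure, so its worst-case input stays concisely describable.

```python
import random

def prng_shuffled_quicksort(xs, seed):
    # A "random" permutation drawn from a PRNG seeded with at most 32 bits.
    # The whole procedure (Quicksort + PRNG + seed) is still concisely
    # describable: the seed adds at most 32 bits to its description, so by
    # the complexity argument its worst-case input remains nearly as
    # compressible as the deterministic algorithm's worst-case input.
    rng = random.Random(seed & 0xFFFFFFFF)  # force a 32-bit seed
    xs = list(xs)
    rng.shuffle(xs)
    return quicksort(xs)  # the quicksort() sketched earlier
```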
Still, I think this is a conclusive proof that randomness can be useful.