“Worst-case analysis” is a standard term of art in computer science, one that shows up as early as second-semester programming, and Eliezer will be better understood if he uses the standard term in the standard way.
Actually, in the context of randomized algorithms, I’ve always seen the term “worst-case running time” refer to Oscar’s case 6, and “worst-case expected running time” (often somewhat misleadingly simplified to “expected running time”) refer to Oscar’s case 2.
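To make that distinction concrete, here is a minimal Python sketch; randomized quicksort is my own choice of running example, not something taken from the original discussion:

```python
import random

def randomized_quicksort(a, rng=random):
    """Sort a list by recursively partitioning around a uniformly random pivot.

    Case 2 ("worst-case expected running time"): the adversary fixes the
    input but not the coin flips; for every input the expected number of
    comparisons is O(n log n).

    Case 6 ("worst-case running time"): the coin flips are also taken to be
    worst-case, so every pivot may be an extreme element and the comparison
    count degrades to Theta(n^2).
    """
    if len(a) <= 1:
        return list(a)
    pivot = a[rng.randrange(len(a))]
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less, rng) + equal + randomized_quicksort(greater, rng)
```

The point of the example is only that the same piece of code has two different “worst cases” depending on whether the random bits are averaged over (case 2) or chosen adversarially along with the input (case 6).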
A computer scientist would not describe the “omega” case as random: if the input is correlated with the random number source in a way that is detectable by the algorithm, then the source is by definition not random.
A system that reliably behaves like the omega case is clearly not random. However, a random system such as case 2 may still occasionally behave like omega, with probability epsilon, and it is not at all unreasonable or uncommon to require your algorithm to work efficiently even in those rare cases. Thus, one might optimize for a random system by modelling it as an omega system and demanding that it work well enough even in that context.
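As a toy illustration of what “correlated with the random number source” can mean in practice (the probing problem and the seed-peeking adversary below are my own construction, purely for illustration):

```python
import random

def find_nonzero(a, rng=random):
    """Probe random positions without repetition until a nonzero entry is found.

    On a benign random input with many nonzero entries, the expected number
    of probes is small.  An "omega" adversary who can predict the probe
    sequence can place the only nonzero entry in the last position the
    algorithm will ever look, forcing ~n probes: against such an adversary
    the random bits buy nothing.
    """
    unvisited = list(range(len(a)))
    probes = 0
    while unvisited:
        i = unvisited.pop(rng.randrange(len(unvisited)))
        probes += 1
        if a[i] != 0:
            return i, probes
    return None, probes

def omega_input(n, seed):
    """Adversary that knows the RNG seed: it simulates the probe order and
    puts the single nonzero entry exactly where the algorithm looks last."""
    rng = random.Random(seed)
    a = [0] * n
    order = []
    unvisited = list(range(n))
    while unvisited:
        order.append(unvisited.pop(rng.randrange(len(unvisited))))
    a[order[-1]] = 1
    return a

a = omega_input(1000, seed=42)
_, probes = find_nonzero(a, random.Random(42))
print(probes)  # 1000: every probe but the last one fails
```

Designing for the omega case here just means bounding the cost at n probes and accepting it, which is exactly the “demand that it work well enough even in that context” move described above.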