I suppose that there are really 6 different complexities. Let x be the random numbers you pick and let y be the input you get. Let f(x,y) be the time your algorithm takes on that instance. We can maximise or take the expected value for each of x and y, in either order.
1. E_x E_y f(x,y) (average-case complexity; Eliezer’s favourite.)
2. max_y E_x f(x,y) (worst-case complexity; Scott’s favourite.)
3. E_x max_y f(x,y) (facing an adversary who knows what random numbers you are going to get.)
4. max_x E_y f(x,y) (you get an input at random; your random number generator tries to give you bad numbers, but doesn’t know which input you’ll get.)
5. E_y max_x f(x,y) (you get an input at random; your random number generator gives you the worst numbers for that input.)
6. max_x max_y f(x,y) (‘Omega’-case complexity; your environment and your random number generator conspire to kill you.)
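As a toy illustration, here is a quick Python sketch that computes all six quantities for a small made-up runtime table (the table values and the uniform distributions over x and y are assumptions, not anything from a real algorithm):

```python
import statistics

# Hypothetical toy example: x indexes the random string, y indexes the input,
# and runtime[x][y] is the time an imaginary algorithm takes on that pair.
runtime = [
    [1, 9, 2],
    [3, 2, 8],
    [4, 4, 4],
]

xs = range(len(runtime))
ys = range(len(runtime[0]))

def E_x(g):
    # Expectation over a uniform random choice of x.
    return statistics.mean(g(x) for x in xs)

def E_y(g):
    # Expectation over a uniform random choice of y.
    return statistics.mean(g(y) for y in ys)

complexities = {
    "1. E_x E_y":     E_x(lambda x: E_y(lambda y: runtime[x][y])),
    "2. max_y E_x":   max(E_x(lambda x: runtime[x][y]) for y in ys),
    "3. E_x max_y":   E_x(lambda x: max(runtime[x][y] for y in ys)),
    "4. max_x E_y":   max(E_y(lambda y: runtime[x][y]) for x in xs),
    "5. E_y max_x":   E_y(lambda y: max(runtime[x][y] for x in xs)),
    "6. max_x max_y": max(runtime[x][y] for x in xs for y in ys),
}

for name, value in complexities.items():
    print(name, "=", value)
```

On this table the six numbers come out all different, which shows the six measures really are distinct in general; note also that replacing an E with a max in either slot can only increase the value.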
There’s no single best way to put a distribution on the possible inputs, so computer scientists who don’t want to choose one can only look at 2, 3, and 6. The problems that can be solved in poly time according to 2 are the ones in BPP, and those that can be solved in poly time according to 6 are the ones in P (you don’t use your RNG if it’s trying to kill you). I don’t know whether the class corresponding to 3 has a name, but it doesn’t seem very interesting.
EDIT: You also don’t want to use your RNG if you’re being judged by 4 or 5, so both of these correspond to the complexity class where you’re judged by average time and don’t have an RNG. Then 1 corresponds to the class where you’re judged by average time and do have an RNG. These complexity classes are known to be the same, since, as Scott says,
given any ensemble of strategies, by convexity there must be at least one deterministic strategy in the ensemble that does at least as well as the average.
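The convexity argument can be checked numerically. In this sketch (reusing the same hypothetical runtime table and uniform distributions as above), each fixed x is a deterministic strategy, and the randomized strategy's expected time is just the average of the deterministic ones, so the best deterministic strategy can't do worse:

```python
import statistics

# Same hypothetical runtime table as before: rows are random strings x,
# columns are inputs y, entries are running times.
runtime = [
    [1, 9, 2],
    [3, 2, 8],
    [4, 4, 4],
]

# Expected time of each deterministic strategy (a fixed x) under uniform y.
det_times = [statistics.mean(row) for row in runtime]

# Expected time of the randomized strategy (uniform x) is the average of those.
randomized_time = statistics.mean(det_times)

# Convexity: the minimum of a list is at most its average, so some
# deterministic strategy does at least as well as the ensemble.
assert min(det_times) <= randomized_time
print(min(det_times), "<=", randomized_time)
```

This is why, once the input distribution is fixed, randomness buys you nothing: you may as well commit to the best single x.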