If you want to be pedantic, randomization doesn’t buy you anything for your complexity classes, because probably P = BPP[1] (BPP is the class of decision problems you can solve in polynomial time, with bounded error, using randomness), so if you can solve a problem with randomness, it is (assuming P = BPP) already in P, and thus already easy to begin with. I’d say complexity classes are pretty important for realizing when your algorithm just isn’t going to work and you need a different approach. For example, no matter how theoretically satisfying it is, you will not get to AGI just by throwing lots of compute at naive inference in Bayes Nets (https://en.wikipedia.org/wiki/Bayesian_network), since exact inference in Bayesian networks is NP-hard.
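To make the intractability point concrete, here is a toy sketch (my own illustration, not from the comment above) of naive exact inference in a hypothetical chain-shaped Bayesian network: it enumerates every joint assignment of the binary variables, so its running time grows as 2^n. The network structure and the 0.8/0.2 transition probabilities are arbitrary choices for the example.

```python
from itertools import product

def naive_marginal(n, query_var):
    """P(X_query = 1) in a toy chain X1 -> X2 -> ... -> Xn, where
    P(X1=1) = 0.5 and P(Xi=1 | parent=1) = 0.8, P(Xi=1 | parent=0) = 0.2.
    Naive inference sums over all 2**n joint assignments."""
    total = unnorm = 0.0
    for assignment in product([0, 1], repeat=n):  # 2**n iterations
        p = 0.5  # P(X1) is uniform
        for i in range(1, n):
            on = 0.8 if assignment[i - 1] == 1 else 0.2
            p *= on if assignment[i] == 1 else 1 - on
        total += p
        if assignment[query_var] == 1:
            unnorm += p
    return unnorm / total

print(naive_marginal(10, 9))  # already touches 2**10 = 1024 assignments
```

Each extra variable doubles the work, so even generous hardware gains buy only a handful of additional variables; that is the sense in which "just use lots of compute" fails against an exponential algorithm.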
Scott Aaronson (2006): “Over the last decade and a half, mounting evidence has convinced almost all of us that in fact P = BPP. In the remaining ten minutes of this lecture, we certainly won’t be able to review this evidence in any depth. But let me quote one theorem, just to give you a flavor of it:”