Re 1: I guess I’d say there are different ways to be reliable; one way is simply being better at not making mistakes in the first place, another way is being better at noticing and correcting them before anything is locked in / before it’s too late to correct. I think that LLMs are already probably around human-level at the first method of being reliable, but they seem to be subhuman at the second. And I think the second method is really important to how humans achieve high reliability in practice. Hence, LLMs are generally less reliable than humans. But notice how o1 is already pretty good at correcting its mistakes, at least in the domain of math reasoning, compared to earlier models… and correspondingly, o1 is way better at math.