I do think reliability is quite important. As one potential counterargument, though, you can get by with lower reliability if you can add additional error-checking and error-correcting steps. The research I've seen is somewhat mixed on how good LLMs are at catching their own errors (but I haven't dug into it deeply or tried to form a strong opinion from that research).
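To make the counterargument a bit more concrete, here's a toy back-of-the-envelope simulation (my own illustration, not drawn from the research mentioned above): a task with N sequential steps, each succeeding with probability p, where an optional checker catches a bad step with probability c and allows one retry. The numbers (p, c, the single-retry assumption) are made up purely for illustration.

```python
import random

def run_task(n_steps, p, c=0.0, trials=100_000, seed=0):
    """Toy model: n_steps sequential steps, each correct with probability p.
    A checker catches a wrong step with probability c and triggers one retry.
    Returns the fraction of trials where every step ends up correct."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        ok = True
        for _ in range(n_steps):
            step_ok = rng.random() < p
            if not step_ok and rng.random() < c:  # checker flags the error
                step_ok = rng.random() < p        # one corrective retry
            if not step_ok:
                ok = False
                break
        if ok:
            successes += 1
    return successes / trials

# With p = 0.95 over 20 steps, end-to-end success is only ~36%; a checker
# that catches 80% of errors lifts that to roughly 79% in this toy setup.
print(run_task(20, 0.95))          # no error checking
print(run_task(20, 0.95, c=0.8))   # with an error-checking/retry step
```

Obviously this assumes the checker's verdicts are independent of the original errors, which is exactly the part the mixed research on LLM self-correction calls into question.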