Obviously this is a tradeoff that depends on how useful LLMs are to you.
As for me, I haven’t found current LLMs to be useful for my work or interests at all. They’re usually right when something is easily searched for, but when something is hard to search for, they’re almost always wrong. So, in my experience, they’re only really useful if one of the following is true:
you’re bad at searching the internet
you’re bad at writing and need to reword something
correctness doesn’t matter (e.g. essays for college classes)
I am confused by takes like this—it just seems so blatantly wrong to me.
For example, yesterday I showed GPT-4o this image.
I asked it to show why (10) is the solution to (9). It wrote out the derivation in perfect LaTeX.
I guess this is in some sense a “trivial” problem, but I couldn’t immediately think of the solution. It is googleable, but only indirectly, because you first have to translate the problem into a more general form. So to claim that LLMs are not useful, I think you have to hold incredibly high standards for what counts as easy or googleable, and to place no value on the convenience of asking the exact question and following up on the answer.
I haven’t used LLMs for math problems. Maybe they’re better at that, or maybe it’s calling WolframAlpha to get that result, or maybe the answer it gave you is wrong and you just don’t realize it. What I can say is that for any kind of non-obvious chemistry, biology, mechanical engineering, or electrical engineering question, or something about the legal meaning of wording in a patent, they’re wrong >90% of the time.
If you’re going to post something like the above, I think you should also include the response you got.
Unfortunately the sharing function is broken for me.
Screenshot?