Yeah, here Google search has approximately zero chance.
Google searches for keywords, LLMs search for the meaning—maybe not in the fully human way, but still much better than “does it contain the right keywords (even if each of them in a completely different context)?”
Trump was huge for helping me understand LLMs. I realized that they were doing something remarkably similar to what he was doing: vibing off of associations, choosing continuations word by word on instinct, [other things]. It makes so much sense that Trump is super impressed by an LLM's ability to write him a speech.
Haha, that’s funny; I just realized recently that I prefer talking to an LLM to talking to most people, because the LLM talks like me.
I mean, it answers my questions, instead of ignoring me, saying something irrelevant, asking me why I don’t already know that, repeating what I already said in a slightly condescending tone while ignoring the question I asked at the end, etc. It is also good at summarizing things, or trying a different solution. Yes, sometimes it makes mistakes… but I hope it’s not too controversial to say that humans do that too; the main difference is that an LLM doesn’t get offended when you point out the mistake it made.
Basically just like I try to communicate with people, only much faster and on a vastly wider range of topics.
...so I guess the real problem is, why does the LLM talk to Trump in a Trump-like way, and to me in a Viliam-like way? Is the prompt enough to figure out the personality of the person asking the question? Or does it actually have much more information about us than we assume?