personality( ground_truth ) --> stated_truth

Or, what someone says is a subset of what they’re really thinking, transformed by their personality (personality plus intentions, context, emotions, etc.).
(You can tell I’m not a mathematician because I didn’t express this in LaTeX xD but I feel like there’s a very elegant linear algebra description where the ground truth is a high-dimensional object, personality transforms it to be low-dimensional enough to communicate, and changes its “angle” (?) / vibe so it fits their goals better.)
So, if you know someone’s personality, meaning, what kind of way they reshape a given ground truth, you can infer things about the ground truth.
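(If you do want to indulge the linear algebra framing, here’s one loose, purely illustrative way to write it down, with $g$ as the ground truth, $s$ as the stated version, and $P$ as the personality, treated as a lossy linear map:

$$ s = P\,g, \qquad g \in \mathbb{R}^{n}, \quad P \in \mathbb{R}^{m \times n}, \quad m \ll n $$

$P$ both drops dimensions (you can only say so much) and rotates what survives (the “angle” / vibe). And if you know $P$, you can partially invert it: the pseudoinverse $P^{+} s$ recovers, roughly, the component of $g$ that made it through the map; the rest is genuinely gone.)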
Anecdote
I feel like everyone does this implicitly; it’s one of those social skills I struggle with. I always thought it was a flaw in other people, to not simply take things at face value but instead think at the level of “why are they saying this”, “this is just because they want <x>”, while I’m always thinking, “no but what did they literally say? is it true or not?”.
It’s frustrating because usually the underlying social/personality context completely dominates what someone says; like, 95% of the time it literally doesn’t fucking matter one bit what someone is saying at the object level. And maybe the fact that humans have evolved to think this way is evidence that, in fact, 95% of what people say doesn’t and shouldn’t matter at the object level, and that the only thing that matters is what it implies about the underlying personality and intentions.
My recollection of sitting through “adult conversations” at family and relative visits, for probably many hundreds of hours, is: “I remember almost zero things that anybody has ever talked about”. I guess the point is that they’re talking, not what it’s about.
Practical
This reframing makes it easier to handle social situations. I can ask myself questions like, “what is the point of this social interaction?”, “why is this person saying this to me?”
Often the answer is, “they just wanna vibe, and this is an excuse to do so”.
Yes, that’s how they signal good relations. Also, it keeps the communication channels open, just in case they might want to say something useful in future.
There have been a number of debates (which I can’t easily search on, which is sad) about whether speech is an action (intended to bring about a consequence) or a truth-communication or truth-seeking (both imperfect, of course) mechanism. It’s both, at different times to different degrees, and often not explicit about what the goals are.
The practical outcome seems spot-on. With some people you can have the meta-conversation about what they want from an interaction, with most you can’t, and you have to make your best guess, which you can refine or change based on their reactions.
Out of curiosity, when chatting with an LLM, do you wonder what its purpose is in the responses it gives? I’m pretty sure it’s “predict a plausible next token”, but I don’t know how I’ll know to change my belief.
I think “it has a simulated purpose, but whether it has an actual purpose is not relevant for interacting with it”.
My intuition is that the token predictor doesn’t have a purpose, that it’s just answering, “what would this chatbot that I am simulating respond with?” For the chatbot character (the Simulacrum) it’s, “What would a helpful chatbot want in this situation?” It behaves as if its purpose is to be helpful and harmless (or whatever personality it was instilled with).
(I’m assuming that as part of making a prediction, it is building (and/or using) models of things, which is a strong statement and idk how to argue for it)
I think framing it as “predicting the next token” is similar to explaining a rock’s behavior when rolling as, “obeying the laws of physics”. Like, it’s a lower-than-useful level of abstraction. It’s easier to predict the rock’s behavior via things it’s bouncing off of, its direction, speed, mass, etc.
Or put another way, “sure it’s predicting the next token, but how is it actually doing that? what does that mean?”. A straightforward way to predict the next token is to actually understand what it means to be a helpful chatbot in this conversation (which includes understanding the world, human psychology, etc.) and completing whatever sentence is currently being written, given that understanding.
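(Aside, to make “predicting the next token” concrete: the outer sampling loop looks roughly like the sketch below. None of this is any particular library’s API, and `model` / `toy_model` are made-up placeholders; the point is just that the loop is dumb, and anything like “being a helpful chatbot” has to live inside that one `model` call, which is exactly why “it predicts the next token” is a physics-level rather than rock-level description.)

```python
import random

def sample_reply(model, prompt_tokens, max_tokens=200, end_token="<END>"):
    """Generate a reply one token at a time.

    `model` is a stand-in for whatever maps (all tokens so far) to a
    probability for every candidate next token. Everything interesting,
    i.e. "understanding what a helpful chatbot would say here", has to
    happen inside that single call; the loop around it is trivial.
    """
    tokens = list(prompt_tokens)
    reply = []
    for _ in range(max_tokens):
        probs = model(tokens)  # dict: token -> probability
        candidates = list(probs)
        weights = [probs[t] for t in candidates]
        next_token = random.choices(candidates, weights=weights)[0]
        if next_token == end_token:
            break
        tokens.append(next_token)
        reply.append(next_token)
    return reply

# Toy stand-in, just to show the loop runs; in a real LLM this one function
# is the hundreds of billions of parameters doing all the work.
def toy_model(tokens):
    return {"hello": 0.5, "world": 0.4, "<END>": 0.1}

print(sample_reply(toy_model, ["user:", "hi"]))
```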
(There’s another angle that makes this very confusing: whether RLHF fundamentally changes the model or not. Does it turn it from a single-token predictor into a multi-token response predictor? Also, is it possible that the base model already has goals beyond predicting one token? Maybe the way it’s trained somehow makes it useful for it to have goals.)
There have been a number of debates (which I can’t easily search on, which is sad) about whether speech is an action (intended to bring about a consequence) or a truth-communication or truth-seeking (both imperfect, of course) mechanism
The immediate reason is that we just enjoy talking to people. Similar to “we eat because we enjoy food / dislike being hungry”. Biologically, hunger developed because of many low-level needs, like our muscles needing glycogen, but subjectively, the cause of eating is some emotion.
I think asking what speech really is at some deeper level doesn’t make sense. Or at least it should recognize that why individual people speak, and why speech developed in humans, are separate topics, with (I’m guessing) very small intersections.