They are deeply schizophrenic, have no consistent beliefs, [...] are deeply psychopathic and seem to have no moral compass
I don’t see how this is any more true of a base model LLM than it is of, say, a weather simulation model.
You enter some initial conditions into the weather simulation, run it, and it gives you a forecast. It’s stochastic, so you can run it multiple times and get different forecasts, sampled from a predictive distribution. And if you had given it different initial conditions, you’d get a forecast for those conditions instead.
Or: you enter some initial conditions (a prompt) into the base model LLM, run it, and it gives you a forecast (completion). It’s stochastic, so you can run it multiple times and get different completions, sampled from a predictive distribution. And if you had given it a different prompt, you’d get a completion for that prompt instead.
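To make the parallel concrete, here is a minimal sketch of "running the model several times on the same prompt," assuming a Hugging Face causal LM (gpt2 here is just a stand-in for a base model like davinci-002, which is only available through an API):

```python
# Sample several completions of one prompt from a base model, the way
# you might run a stochastic weather simulation several times on the
# same initial conditions. (Sketch; gpt2 is a stand-in base model.)
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Once upon a time, in a village by the sea,"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# do_sample=True draws from the model's predictive distribution, so each
# of the three returned sequences can be a different completion.
outputs = model.generate(
    input_ids,
    do_sample=True,
    temperature=1.0,
    max_new_tokens=30,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
    print("---")
```

Different prompts, like different initial conditions, just give you samples from a different predictive distribution.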
It would be strange to call the weather simulation “schizophrenic,” or to say it “has no consistent beliefs.” If you put in conditions that imply sun tomorrow, it will predict sun; if you put in conditions that imply rain tomorrow, it will predict rain. It is not confused or inconsistent about anything, when it makes these predictions. How is the LLM any different?[1]
Meanwhile, it would be even stranger to say “the weather simulation has no moral compass.”
In the case of LLMs, I take this to mean something like, “they are indifferent to the moral status of their outputs, instead aiming only for predictive accuracy.”
This is also true of the weather simulation—and there it is a virtue, if anything! Hurricanes are bad, and we prefer them not to happen. But we would not want the simulation to avoid predicting hurricanes on account of this.
As for “psychopathic,” davinci-002 is not “psychopathic,” any more than a weather model, or my laptop, or my toaster. It does not neglect to treat me as a moral patient, because it never has a chance to do so in the first place. If I put a prompt into it, it does not know that it is being prompted by anyone; from its perspective it is still in training, looking at yet another scraped text sample among billions of others like it.
Or: sometimes, I think about different courses of action I could take. To aid me in my decision, I imagine how people I know would respond to them. I try, here, to imagine only how they really would respond—as apart from how they ought to respond, or how I would like them to respond.
If a base model is psychopathic, then so am I, in these moments. But surely that can’t be right?
Like, yes, it is true that these systems—weather simulation, toaster, GPT-3—are not human beings. They’re things of another kind.
But framing them as “alien,” or as “not behaving as a human would,” implies some expected reference point of “what a human would do if that human were, somehow, this system,” which doesn’t make much sense if thought through in detail—and which we don’t, and shouldn’t, usually demand of our tools and machines.
Is my toaster alien, on account of behaving as it does? What would “behaving as a human would” look like, for a toaster?
Should I be unsettled by the fact that the world around me does not teem with levers and handles and LEDs in frantic motion, all madly tapping out morse code for “SOS SOS I AM TRAPPED IN A [toaster / refrigerator / automatic sliding door / piece of text prediction software]”? Would the world be less “alien,” if it were like that?
often spout completely non-human kinds of texts
I am curious what you mean by this. LLMs are mostly trained on texts written by humans, so this would be some sort of failure, if it did occur often.
But I don’t know of anything fitting this description that occurs often. There are cases like the Harry Potter sample I discuss here, but those have gotten rare as the models have gotten better, though they do still happen on occasion.
[1] The weather simulation does have consistent beliefs in the sense that it always uses the same (approximation to) real physics. In this sense, the LLM also has consistent beliefs, reflected in the fact that its weights are fixed.
I also think the cognition in a weather model is very alien. It’s less powerful and general, so I think the error in applying something like the Shoggoth image to it (or calling it “alien”) would be that it implies too much generality; the alienness itself seems appropriate.
If you somehow had a mind constructed on the same principles as weather simulations, or your laptop, or your toaster (whatever that would mean; I feel like the analogy is fraying a bit here), and it displayed signs of general intelligence similar to LLMs, then yeah, I think analogizing it to alien/eldritch intelligences would be pretty appropriate.
It is a very common (and even to me tempting) error to see a system with the generality of GPT-4, trained on human imitation, and imagine that it must internally think like a human. But my best guess is that is not what is going on, and in some sense it is valuable to be reminded that the internal cognition going on in GPT-4 is probably about as far from what is going on in a human brain as a weather simulation is from what is going on in a human trying to forecast the weather (de facto I think GPT-4 is somewhere in between, since I do think the imitation learning creates some structural similarities between humans and LLMs, but overall, being reminded of this relevant dimension of alienness pays off a good amount in anticipated experiences).
I mostly agree with this comment, but I also think it is saying something different from the one I responded to.
In the comment I responded to, you wrote:
It is the case that base models are quite alien. They are deeply schizophrenic, have no consistent beliefs, often spout completely non-human kinds of texts, are deeply psychopathic and seem to have no moral compass. Describing them as a Shoggoth seems pretty reasonable to me, as far as alien intelligences go
As I described above, these properties seem more like structural features of the language modeling task than attributes of LLM cognition. A human trying to do language modeling (as in that game that Buck et al. made) would exhibit the same list of nasty-sounding properties for the duration of the experience—as in, if you read the text “generated” by the human, you would tar the human with the same brush for the same reasons—even if their cognition remained as human as ever.
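To spell out what “a human trying to do language modeling” means operationally, here is a toy version of that kind of setup (illustrative only, not the actual game referenced above): the player assigns probabilities to candidate next tokens and is scored by log loss, the same objective a base model is trained on.

```python
# Toy "human as language model" scoring: the player puts a probability
# distribution over candidate next words for a prefix, and is scored by
# the same log loss a base model is trained to minimize.
# (Illustrative sketch; the candidates and numbers are made up.)
import math

prefix = "The capital of France is"
actual_next_word = "Paris"

# A hypothetical human player's guesses:
player_probs = {"Paris": 0.90, "the": 0.05, "a": 0.05}

loss = -math.log(player_probs.get(actual_next_word, 1e-9))
print(f"per-token log loss: {loss:.3f} nats")
```

Nothing in the scoring rule cares whether the continuation is kind, consistent, or morally acceptable; it only cares whether it was probable.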
I agree that LLM internals probably look different from human mind internals. I also agree that people sometimes make the mistake “GPT-4 is, internally, thinking much like a person would if they were writing this text I’m seeing,” when we don’t actually know the extent to which that is true. I don’t have a strong position on how helpful vs. misleading the shoggoth image is, as a corrective to this mistake.
You started with random numbers, and you essentially applied rounds of constraint application and annealing. I kinda think of it as getting a metal really hot and pouring it over a mold. In this case, the ‘mold’ is your training set.
So what jumps out at me about the “shoggoth” idea is that it’s got all these properties: the “shoggoth” hates you, wants to eat you, is just ready to jump you and digest you with its tentacles. Or whatever.
But none of that cognitive structure will exist unless it pays rent in compressing tokens. The training algorithm will not find the optimal compression scheme, but even so, you only have a tiny fraction of the weights you would need to record the token continuations at Chinchilla scaling. You need every last weight to be pulling its weight (no pun intended).
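A rough back-of-envelope version of that capacity argument, using illustrative numbers (Chinchilla-style scaling of roughly 20 training tokens per parameter; none of these figures come from the comment above):

```python
# Why the weights can't just "record the token continuations":
# compare parameter storage to the bits needed to record the training
# tokens verbatim. All numbers are illustrative assumptions.
import math

params = 70e9                 # a Chinchilla-scale model
tokens = 20 * params          # ~20 training tokens per parameter
bits_per_param = 16           # fp16/bf16 weights
vocab_size = 50_257           # GPT-2-style vocabulary, as an example

param_storage_bits = params * bits_per_param
verbatim_token_bits = tokens * math.log2(vocab_size)

print(f"parameter storage:      {param_storage_bits:.2e} bits")
print(f"verbatim token storage: {verbatim_token_bits:.2e} bits")
print(f"shortfall: ~{verbatim_token_bits / param_storage_bits:.0f}x")
```

With roughly 20x too little capacity to memorize the training stream outright, structure only survives in the weights if it earns its keep as compression, which is the “paid rent” point above.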