I’m a little wary of that dichotomy utilized in a load-bearing way
Yeah, I realize that the whole “shoggoth” and “mask” distinction is just a metaphor, but I think it’s a useful one. It’s there in the data: in the infinite-data, infinite-parameters limit, the model is an accurate simulator of the universe, including the human writing text on the internet and, separately, the system that tweaks the parameters of the simulation according to the input. That of course doesn’t necessarily mean that actual LLMs, far away from that limit, reflect that distinction, but it seems natural to me to analyze a model’s “psychology” in those terms. One can even speculate that the layers of neurons closer to the input are “more shoggoth” and the ones closer to the output are “more mask”.
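(To spell out what I mean by “in the limit”, in my own notation rather than anything from the quoted comment: cross-entropy is minimized by the true conditional distribution, so with unbounded data and capacity, and assuming training actually finds the global optimum,

$$\theta^{*}=\arg\min_{\theta}\;\mathbb{E}_{x_{1:T}\sim p_{\text{data}}}\Big[-\sum_{t}\log q_{\theta}(x_{t}\mid x_{<t})\Big]\quad\Longrightarrow\quad q_{\theta^{*}}(x_{t}\mid x_{<t})=p_{\text{data}}(x_{t}\mid x_{<t}),$$

i.e. a perfect simulator of whatever process generated the text. Everything I say below about the shoggoth/mask split is about how a real, finite model might factor its approximation of that distribution, not about the limit itself.)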
I would consider it major progress on the inner alignment problem
I would not. Being vaguely, kinda-sorta human-like doesn’t mean safe. Even regular humans are not aligned with other humans; that’s why we have democracy and law. And kinda-sorta-humans with superhuman abilities may be even less safe than any old half-consequentialist, half-deontological quasi-agent we can train with pure RLHF. But who knows.
given a long context window, LLMs reconstruct the information which would have been kept around in a recurrent state pretty well anyway.
True. All the incredible progress of modern LLMs is just a set of clever optimization tricks over RNNs that made them less computationally expensive. That doesn’t say anything about agency or safety, though.
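Just to illustrate the structural point, here is a toy Python sketch (f, g, and h0 are placeholder functions and values, not any real architecture):

```python
def rnn_generate(f, tokens, h0):
    """Recurrent net: information survives only through the carried state h."""
    h, outputs = h0, []
    for token in tokens:
        h = f(h, token)                 # state is explicitly threaded through time
        outputs.append(h)
    return outputs

def transformer_generate(g, tokens):
    """Context-window model: any needed 'state' is recomputed from the prefix."""
    outputs = []
    for t in range(1, len(tokens) + 1):
        outputs.append(g(tokens[:t]))   # looks at the whole visible prefix each step
    return outputs
```

Either way the same information is available at each step; whether it is cached in a hidden state or re-derived from the context doesn’t by itself say anything about agency.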
not very plausible that the key dividing line between agentic and non-agentic is whether the architecture keeps state around
Sorry, it looks like I wasn’t very clear. My point is not that a stateless function can’t be agentic when it’s looped around a state; any computable process can be represented as a stateless function in a loop, as any functional bro knows. And of course LLMs do keep state around.
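For concreteness, that’s just this shape (a generic sketch; policy and env are placeholders, nothing LLM-specific):

```python
def step(policy, state, observation):
    """Stateless: the output depends only on the arguments passed in."""
    action, new_state = policy(state, observation)
    return new_state, action

def run(policy, env, state):
    """The surrounding loop, not the function itself, threads state through time."""
    while True:
        observation = env.observe()
        state, action = step(policy, state, observation)
        env.act(action)                 # feedback arrives via the loop + environment
```

So “stateless vs. stateful” can’t be the dividing line; the interesting question is what the thing inside the loop was optimized for, which is my point below.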
Some kind of state/memory (or a good enough ability to observe the environment) is necessary for agency, but not sufficient. All existing agents we know of are agents because they were specifically trained for agency. A chess AI is an agent on the chess board because it was trained specifically to do things on the chess board, i.e. to win the game. The human brain is an agent in the real world because it was specifically trained to do stuff in the real world, i.e. to survive on the savannah and make more humans. Then of course the real world changed, and proxy objectives like “have sex” stopped being correlated with the meta-objective “make more copies of your genes”. But the agency in the real world was there in the data from the start; it didn’t just pop up out of nothing.
The shoggoth wasn’t trained to do stuff in the real world. It is trained to output the parameters of a simulation of a virtual world; the simulator part is then trained to simulate that virtual world in such a way that the tiny simulated human inside writes text on its tiny simulated computer, and that text must be the same as the text that real humans in the real world would write given the previous text. That’s the setup. That’s what the shoggoth does in the limit.
Agency (and consequentialism in particular) is when you output stuff to the real world, and you get rewarded depending on what the real world looks like as a consequence of your output. There is no correlation between what the shoggoth (or any given LLM as a whole, for that matter) outputs and whatever happens in the real world as a consequence, at least none that the shoggoth (I mean the gradient descent that shapes it) would get any feedback on. The training data doesn’t care; it’s static. And there are no such correlations in the data in the first place. So where does the shoggoth’s agency come from?
RLHF, on the other hand, does feed back around. And that is why I think RLHF can potentially make an LLM less safe, not more.
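Schematically, the difference I’m pointing at looks like this (the loss and reward functions here are placeholders, not any specific implementation):

```python
def pretraining_loss(model, dataset, cross_entropy):
    """Static data: nothing the model outputs ever changes what it is graded on."""
    return sum(cross_entropy(model(prefix), next_token)
               for prefix, next_token in dataset)

def rlhf_objective(model, prompts, reward_model):
    """The model's own outputs are what get scored, so the training signal
    now depends on consequences of those outputs (here just a reward model,
    but in principle anything downstream of them)."""
    return sum(reward_model(prompt, model.generate(prompt))
               for prompt in prompts)
```

The second loop is where consequence-sensitive pressure can start to creep in, which is why I’d expect RLHF to push toward agency rather than away from it.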
Since the world-model-consultation is only selected to be useful for predicting the next token, the consequentialist question which the system asks its world-model could be fairly arbitrary so long as it has a good correlation with next-token-prediction utility on the training data.
I would argue that in the LLM case this emergent prediction-utility is not a thing at all, since there’s no pressure on the shoggoth (or the LLM as a whole) to measure it somehow. What would it do upon noticing that it had just made a mistake? Apologize and rewrite the paragraph? That’s not how texts on the internet work. Again, agents get feedback from the environment signaling that the plan didn’t work; that’s not the case with LLMs. But set that aside: let’s say this utility-driven behavior does indeed emerge. Does this prediction-utility have anything to do with consequences in the real world? Which world is that world-model a model of? A chess AI clearly does have a “winning utility”, and it’s an agent, but only in the small world of the chess board.
Is this planning? IE does the “query to the world-model” involve considering multiple plans and rejecting worse ones?
I guess it’s plausible that there is a planning mechanism somewhere inside LLMs. But it’s not planning on the shoggoth’s part. I can imagine the simulator part “thinking”: “okay, this simulation sequence doesn’t seem very realistic, let’s try it this way instead”, but again, that’s not planning in the real world, it’s planning how to simulate the virtual one.
Input-output type signatures do not tell us much about the simplicity or complexity of calculations within. “It’s just circuits” but large circuits can implement some pretty sophisticated algorithms. Big NNs do not equal big lookup tables.
Agree.