Thanks for the feedback. I agree with worries (1) and (2), and I think there is a way to de-risk this.
The block hierarchy responsible for tracking the local context consists of classic Transformer blocks. Only the tracking of the user’s own history really needs to be an SSM hierarchy, because it quickly surpasses the scalability limits of self-attention (the same goes for the interlocutor-tracking blocks on private 1-1 chats, which can also grow arbitrarily long, but there is probably no such data available for training). On public data (such as forums, public chat room logs, Diplomacy and other text game logs), the interlocutor’s history traces will, 99% of the time, easily be under 100k symbols, but for symmetry with the user’s own state (same weights!) and to keep the same representation structure, they should of course mirror the user’s own SSM blocks.
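To make this division of labour concrete, here is a minimal PyTorch-style sketch of how I picture it: a standard Transformer encoder for the bounded local context, and one weight-shared SSM-style tracker applied to both the user’s own history and the interlocutor’s history. Everything in it (`SimpleSSMBlock`, `DualStateTracker`, the gated recurrence, the sizes) is an illustrative stand-in rather than the actual SociaLLM blocks:

```python
# Minimal sketch (PyTorch) of the split described above. A classic Transformer encoder
# handles the bounded local context, while a single weight-shared stack of SSM-style
# blocks tracks both the user's own history and the interlocutor's history. The
# SimpleSSMBlock below is a gated linear recurrence standing in for a selective-SSM
# block; all names and sizes are illustrative, not the actual SociaLLM components.
import torch
import torch.nn as nn


class SimpleSSMBlock(nn.Module):
    """Stand-in for a selective-SSM block: a gated linear recurrence over the sequence."""

    def __init__(self, d_model: int):
        super().__init__()
        self.in_proj = nn.Linear(d_model, d_model)
        self.gate = nn.Linear(d_model, d_model)
        self.decay = nn.Parameter(torch.zeros(d_model))   # per-channel state decay (logit)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, seq, d_model)
        u, g = self.in_proj(x), torch.sigmoid(self.gate(x))
        a = torch.sigmoid(self.decay)                      # decay in (0, 1)
        state = torch.zeros(x.size(0), x.size(2), device=x.device)
        outs = []
        for t in range(x.size(1)):  # O(seq) recurrence: arbitrarily long histories are fine
            state = a * state + (1 - a) * u[:, t]
            outs.append(g[:, t] * state)
        return torch.stack(outs, dim=1)


class DualStateTracker(nn.Module):
    """The same SSM weights track both the user's own and the interlocutor's history."""

    def __init__(self, d_model: int = 256, n_blocks: int = 2):
        super().__init__()
        self.ssm_blocks = nn.ModuleList([SimpleSSMBlock(d_model) for _ in range(n_blocks)])

    def track(self, history: torch.Tensor) -> torch.Tensor:
        h = history
        for block in self.ssm_blocks:
            h = h + block(h)                               # residual
        return h[:, -1]                                    # final state summarises the history

    def forward(self, own_history: torch.Tensor, other_history: torch.Tensor):
        # One set of weights, two streams: same representation structure for both states.
        return self.track(own_history), self.track(other_history)


if __name__ == "__main__":
    d = 256
    local_context = nn.TransformerEncoder(                 # classic Transformer blocks
        nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True), num_layers=2
    )
    tracker = DualStateTracker(d_model=d)
    local = local_context(torch.randn(1, 128, d))          # bounded local context
    own, other = tracker(torch.randn(1, 1024, d),          # long user history
                         torch.randn(1, 256, d))           # shorter interlocutor history
    print(local.shape, own.shape, other.shape)
```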
With such an approach, the SSM hierarchies could start very small, with only a few blocks or even just a single SSM block each (i.e., two blocks in total: one for the user’s own state and one for the interlocutor’s state), and attach to the middle of the Transformer hierarchy to select from it. However, I don’t think this approach could simply be slapped onto a pre-trained Llama or another large Transformer LLM. I suspect the Transformer should be co-trained with the SSM blocks to induce the Transformer to make the corresponding representations useful for the SSM blocks. “Pretraining Language Models with Human Preferences” is my intuition pump here.
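A hedged sketch of this “attach to the middle and co-train” step, continuing the toy setup above: a recurrent tracker (a GRU stand-in for the SSM hierarchy) reads the hidden states of a middle Transformer layer, “selects from” them with causal cross-attention, and its output is added back before the upper layers; a single next-token loss then flows through both parts, which is what would shape the mid-layer representations to be useful for the tracker. The wiring in `MidAttachedTracker` is my guess at one reasonable instantiation, not a spec:

```python
# Hedged sketch: attach a small recurrent tracker to the middle of a Transformer stack
# and co-train everything with one LM loss. The GRU and cross-attention are illustrative
# stand-ins for the SSM hierarchy and its "selection" from the middle layer.
import torch
import torch.nn as nn


class MidAttachedTracker(nn.Module):
    def __init__(self, d_model: int = 256, n_layers: int = 8, vocab: int = 32000):
        super().__init__()
        make_layer = lambda: nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.lower = nn.ModuleList([make_layer() for _ in range(n_layers // 2)])
        self.upper = nn.ModuleList([make_layer() for _ in range(n_layers // 2)])
        self.embed = nn.Embedding(vocab, d_model)
        self.tracker = nn.GRU(d_model, d_model, batch_first=True)    # SSM stand-in
        self.select = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.lm_head = nn.Linear(d_model, vocab)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        seq_len = tokens.size(1)
        causal = nn.Transformer.generate_square_subsequent_mask(seq_len)
        h = self.embed(tokens)
        for blk in self.lower:                   # lower half of the Transformer hierarchy
            h = blk(h, src_mask=causal)
        states, _ = self.tracker(h)              # per-position recurrent state summary
        selected, _ = self.select(states, h, h, attn_mask=causal)  # select from mid layer
        h = h + selected                         # feed the tracked state upward
        for blk in self.upper:
            h = blk(h, src_mask=causal)
        return self.lm_head(h)


if __name__ == "__main__":
    model = MidAttachedTracker()
    tokens = torch.randint(0, 32000, (2, 64))
    logits = model(tokens)
    # Co-training: one next-token loss trains the Transformer and the tracker jointly,
    # so the Transformer's mid-layer representations become useful for the tracker too.
    loss = nn.functional.cross_entropy(
        logits[:, :-1].reshape(-1, 32000), tokens[:, 1:].reshape(-1)
    )
    loss.backward()
    print(float(loss))
```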
Regarding the sufficiency and quality of training data, the Transformer hierarchy itself could still be trained on arbitrary texts, just as current LLMs are. And we can adjust the size of the SSM hierarchies to the amount of high-quality dialogue and forum data we are able to obtain. I think it is a no-brainer that this design would improve the frontier quality in LLM apps that value personalisation and attunement to the user’s current state (psychological, emotional, levels of knowledge, etc.), relative to whatever “base” Transformer model we would take (such as Llama, or any other).
One additional worry is that many of the research benefits of SociaLLM may not be out of reach for current foundation models.

With this I disagree: I think it’s critical for the user state tracking to be energy-based. I don’t think there are ways to recapitulate this with auto-regressive Transformer language models (cf. any of LeCun’s presentations from the last year). There are potential ways to recapitulate it with other language modelling architectures (non-Transformer and non-SSM), but they currently don’t hold any stronger promise than SSMs, so I don’t see any reason to pick them.
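To unpack what I mean by “energy-based” here, a heavily hedged sketch: score (candidate user state, dialogue context) pairs with a scalar energy, train the energy contrastively, and infer the state by picking the lowest-energy candidate rather than emitting it auto-regressively. This is one generic instantiation in the spirit of the energy-based framing, not LeCun’s actual proposal and not a finished SociaLLM component; the names (`StateEnergy`, `infer_state`) and the margin loss are illustrative:

```python
# Hedged, generic sketch of energy-based user-state tracking: an energy function scores
# how compatible a candidate user state is with the dialogue context; training is a toy
# contrastive margin loss; inference picks the lowest-energy candidate.
import torch
import torch.nn as nn


class StateEnergy(nn.Module):
    """E(state, context) -> scalar; lower energy = more compatible user state."""

    def __init__(self, d_state: int = 64, d_ctx: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_state + d_ctx, 256), nn.ReLU(), nn.Linear(256, 1)
        )

    def forward(self, state: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, ctx], dim=-1)).squeeze(-1)


def infer_state(energy: StateEnergy, candidates: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
    """Pick the candidate state with minimal energy given the dialogue context."""
    scores = energy(candidates, ctx.expand(candidates.size(0), -1))
    return candidates[scores.argmin()]


if __name__ == "__main__":
    energy = StateEnergy()
    ctx = torch.randn(1, 256)           # summary of the dialogue so far (illustrative)
    good_state = torch.randn(1, 64)     # a state consistent with the observed next message
    bad_states = torch.randn(8, 64)     # negative (inconsistent) candidate states
    # Contrastive training: push the consistent state's energy below the negatives'.
    margin = 1.0
    e_pos = energy(good_state, ctx)
    e_neg = energy(bad_states, ctx.expand(8, -1))
    loss = torch.relu(margin + e_pos - e_neg).mean()
    loss.backward()
    # Inference: select the lowest-energy candidate state.
    best = infer_state(energy, torch.randn(16, 64), ctx)
    print(float(loss), best.shape)
```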
Interesting, thanks for sharing your thoughts. If this could improve the social intelligence of models, then it could raise the risk of pushing the frontier of dangerous capabilities. It is worth noting that we are generally more interested in methods that (1) are more likely to transfer to AGI (i.e., don’t over-rely on specific details of a chosen architecture) and (2) specifically target alignment instead of furthering the social capabilities of the model.
On (1), cf. this report: “The current portfolio of work on AI risk is over-indexed on work which treats “transformative AI” as a black box and tries to plan around that. I think that we can and should be peering inside that box (and this may involve plans targeted at more specific risks).”
On (2), I’m surprised to read this from you, since you suggested engineering Self-Other Overlap into LLMs in your AI Safety Camp proposal, if I understood and remember correctly. Do you actually see a line (or a way) of increasing the overlap without furthering ToM and therefore “social capabilities”? (Which ties back to “almost all applied/empirical AI safety work is simultaneously capabilities work”.)
On (1), some approaches are neglected for a good reason. You can also target specific risks while treating TAI as a black box (such as self-other overlap for deception). I think it can be reasonable to “peer inside the box” if your model is general enough and you have a good enough reason to think that your assumptions about model internals have any chance at all of resembling the internals of transformative AI.
On (2), I expect that if the internals of LLMs and humans are different enough, self-other overlap would not provide any significant capability benefits. I also expect that in so far as using self-representations is useful to predict others, you probably don’t need to specifically induce self-other overlap at the neural level for that strategy to be learned, but I am uncertain about this as this is highly contingent on the learning setup.
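(For concreteness, one way “inducing self-other overlap at the neural level” is sometimes operationalised is as an auxiliary loss pulling a model’s hidden activations on matched self-referencing and other-referencing inputs towards each other. The sketch below is only a guess at a minimal form of that idea, not the method from the AI Safety Camp proposal discussed above.)

```python
# Illustrative sketch only: an auxiliary "self-other overlap" term that penalises the
# distance between activations on paired self-referencing and other-referencing inputs.
import torch
import torch.nn as nn


def self_other_overlap_loss(model: nn.Module,
                            self_batch: torch.Tensor,
                            other_batch: torch.Tensor) -> torch.Tensor:
    """Mean squared distance between activations on paired 'self' and 'other' inputs."""
    h_self = model(self_batch)     # activations on self-referencing inputs
    h_other = model(other_batch)   # activations on the matched other-referencing inputs
    return ((h_self - h_other) ** 2).mean()


if __name__ == "__main__":
    # Toy stand-in encoder; in practice this would be hidden states of an LLM layer.
    encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
    self_inputs = torch.randn(4, 32)    # e.g. embeddings of "I will ..." style prompts
    other_inputs = torch.randn(4, 32)   # matched "They will ..." style prompts
    task_loss = encoder(self_inputs).pow(2).mean()   # placeholder for the main objective
    overlap = self_other_overlap_loss(encoder, self_inputs, other_inputs)
    total = task_loss + 0.1 * overlap                # auxiliary term added to training
    total.backward()
    print(float(overlap))
```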
I absolutely agree that future TAI may look nothing like current architectures. Cf. this tweet by Kenneth Stanley, with whom I agree 100%. At the same time, I think it’s a methodological mistake to conclude from this that we should only work on approaches and techniques that are applicable to any AI, in a black-box manner. That would be like tying our hands behind our backs. We can and should affect the designs of future TAIs through our research, by demonstrating the promise (or inherent limitations) of particular alignment techniques, so that these techniques gain or lose traction and are included in or excluded from the TAI design. So we are not just making “assumptions” about the internals of future TAIs; we are shaping those internals.
We can and should think about the proliferation risks[1] (i.e., the risks that some TAI will be created by downright rogue actors), but IMO most of that thinking should be on the governance side, not the technical side. We agree with Davidad here that a good technical AI safety plan should be accompanied by a good governance plan (including compute monitoring).
In our own plan (Gaia Network), we do this in the penultimate paragraph here.
I do not think we should only work on approaches that work on any AI; I agree that would constitute a methodological mistake. I have found framings that general not to be very conducive to progress.
You are right that we still have the chance to shape the internals of TAI, even though there are a lot of hoops to go through to make that happen. We think that this is still worthwhile, which is why we stated our interest in potentially helping with the development and deployment of provably safe architectures, even though they currently seem less competitive.
In my response, I was trying to highlight the point that, whenever we can, we should keep our assumptions to a minimum given the uncertainty we are under. That said, it is reasonable to have some working assumptions that allow progress to be made in the first place, as long as they are clearly stated.
I also agree with Davidad about the importance of governance for the successful implementation of a technical AI Safety plan as well as with your claim that proliferation risks are important, with the caveat that I am less worried about proliferation risks in a world with very short timelines.
This conversation has prompted me to write “AGI will be made of heterogeneous components, Transformer and Selective SSM blocks will be among them”.