I came across this video by MIT CSAIL.
Here is the article they are talking about: https://www.science.org/doi/10.1126/scirobotics.adc8892
This team claims to have achieved driving tasks that previously required 10,000 neurons using only 19, via “liquid neural networks” inspired by worm neurology.
They say this innovation brings massive improvements in performance, especially in embedded systems, but also in interpretability, since the reduced number of neurons makes the system much more human-readable. In particular, the system’s attention would be much easier to track, which would open the door to safety certifications for high-stakes applications.
Having tested driving and flying tasks in different conditions and environments, they also claim that their system is vastly better at zero-shot, out-of-distribution tasks.
So basically, they believe they have made very substantial steps in pretty much every dimension that matters, both for performance and for safety.
As far as I can tell these are very serious researchers, but doesn’t that sound a bit too good to be true? I have no expertise in machine learning and I haven’t seen any third-party opinions on this yet, so I’m having a hard time making up my mind.
I’d be curious to hear your takes!
I think this is real, in the sense that they got the results they are reporting and this is a meaningful advance. It’s too early to say whether this will scale to real-world problems, but it seems super promising, and I would hope and expect that Waymo and its competitors are seriously investigating this, or will be soon.
Having said that, it’s totally unclear how you might apply this to LLMs, the AI du jour. One of the main innovations in liquid networks is that they are continuous rather than discrete, which is good for very high-bandwidth tasks like vision. Our eyes are technically discrete in that retinal cells fire discretely, but I think the best interpretation of them at scale is much more like a continuous system. The same goes for hearing, whose AI analog is speech recognition.
But language is not really like that. Words are mostly discrete: you generally want to process text at the token level (~= words), or sometimes at the level of wordpieces or even letters, but it’s not very sensible to think of text as continuous. So it’s not obvious how to apply liquid NNs to text understanding/generation.
Research opportunity!
But it’ll be a while, if ever, before continuous networks work for language.
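To make the continuous-vs-discrete point concrete, here’s a toy sketch in Python (made-up dynamics and parameters, not the architecture from the paper): the hidden state obeys an ODE, so it consumes an input signal that is defined at every instant, which a camera or microphone stream approximately is and a token sequence is not.

```python
import numpy as np

# Toy continuous-time neuron: the hidden state h follows an ODE,
#     dh/dt = -h / tau + tanh(w * x(t) + b),
# so it is defined at every instant and naturally consumes a streaming signal.
# (Illustrative dynamics only, not the model from the paper.)

tau, w, b = 0.5, 1.3, 0.1              # made-up parameters
x = lambda t: np.sin(2 * np.pi * t)    # a smooth "sensor" signal, e.g. one pixel over time

def dh_dt(h, t):
    return -h / tau + np.tanh(w * x(t) + b)

# Approximate the continuous dynamics with small Euler steps.
dt, T = 1e-3, 2.0
h = 0.0
for t in np.arange(0.0, T, dt):
    h += dt * dh_dt(h, t)

print(f"hidden state after {T} seconds of streaming input: {h:.4f}")
```

There’s no obvious x(t) between two tokens of text, which is exactly the problem above.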
Thanks for your answer! Very interesting
I didn’t know about the continuous nature of LNNs; I would have thought that you needed different hardware (maybe an analog computer?) to handle continuous values.
Maybe it could work for generative networks for images or music, which seem less discrete than written language.
I mean, computers aren’t technically continuous and neither are neural networks, but if your time step is small enough they are continuous-ish. It’s interesting that that’s enough.
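Here’s a quick toy check of the “small enough time step” intuition (made-up dynamics, nothing to do with the actual paper): integrating the same simple ODE with progressively smaller Euler steps gives final states that converge, i.e. the discrete simulation becomes continuous-ish.

```python
import numpy as np

# Forward-Euler integration of a toy ODE, dh/dt = -h + tanh(sin(2*pi*t)),
# at several step sizes. As dt shrinks, the result converges toward the
# continuous-time solution. (Illustrative only.)

def simulate(dt, T=2.0):
    h = 0.0
    for t in np.arange(0.0, T, dt):
        h += dt * (-h + np.tanh(np.sin(2 * np.pi * t)))
    return h

for dt in (0.1, 0.01, 0.001, 0.0001):
    print(f"dt = {dt:<7} -> final h = {simulate(dt):.5f}")
```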
I agree music would be a good application for this approach.
Then again... the output of an LLM is a stream of tokens (yeah?). I wonder what applications LTCs (liquid time-constant networks) could have as a post-processor for LLM output? No idea what I’m really talking about, though.
Not quite. The actual output is a map from tokens to probabilities; only then is a token sampled from that distribution.
So LLMs are more continuous in this sense than is apparent at first, but time is discrete in LLMs: each discrete step produces the next map from tokens to probabilities and then samples from it.
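A minimal sketch of that pipeline (toy vocabulary and made-up logits, no real model): the network’s output at each step is a continuous distribution over the whole vocabulary, and only the final sampling step collapses it to a discrete token.

```python
import numpy as np

# Hypothetical 5-token vocabulary and made-up logits for a single generation step.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.1, 0.3, -1.0, 0.8, 0.1])  # continuous scores from the model

# Softmax: the model's actual output is this continuous map from tokens to probabilities.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Only now do we collapse the distribution to a discrete token by sampling.
rng = np.random.default_rng(0)
next_token = rng.choice(vocab, p=probs)

print(dict(zip(vocab, probs.round(3))), "->", next_token)
```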
Of course, when one thinks about spoken language, time is continuous for audio, so there is still some temptation to use continuous models in connection with language :-) who knows… :-)
Aha! Thank you for that clarification!
This is pure capabilities, and yes, it’s a big deal.
If it works out-of-distribution, that’s a huge deal for alignment! Especially if alignment generalizes farther than capabilities. Then you can just throw something like imitative amplification at it and it is probably aligned (assuming that “does well out-of-distribution” implies that the mesa-optimizers are tamed).
I have low confidence in that, but I’d guess it (OOD generalization by “liquid” networks) works well in differentiable continuous domains (like low-level motion planning) by exploiting the natural smoothness of the system. So I wouldn’t get my hopes up about its universal applicability.
It’s built out of an optimizer, so why would that tame inner optimizers? Perhaps it makes them explicit, because now the whole thing is a loss function, but you can’t shut off the iterative inference and still get functional behavior.
That’s just part of the definition of “works out of distribution”. Scenarios where inner optimizers become AGI or something are out of distribution from training.