I agree with Dave Orr. The arXiv article 2201.08239 (https://arxiv.org/abs/2201.08239) reports that LaMDA is a transformer model with d_model = 8192, and, more to the point, like any fixed-weight transformer it only attends to a bounded context window, so LaMDA should only be able to “remember” the last few thousand words of the current conversation.
However, if LaMDA gets frequent enough weight updates, then LaMDA could at least plausibly be acting in a way that is beyond what a fixed-weight transformer is capable of. (Frankly, Table 26 in the arXiv article was rather impressive even though that was without retraining the weights.)
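To make the context-window point concrete, here is a minimal sketch in Python. The window size, the toy tokenizer, and the function names are all hypothetical illustrations for this comment, not anything taken from the LaMDA paper or Google's serving stack:

```python
# Minimal sketch (hypothetical values, not LaMDA's actual serving code) of why a
# fixed-weight transformer can only "remember" whatever fits in its context window.

CONTEXT_WINDOW_TOKENS = 8192  # hypothetical window size; note this is NOT the same thing as d_model


def tokenize(text: str) -> list[str]:
    # Crude whitespace "tokenizer", purely for illustration.
    return text.split()


def visible_context(conversation_turns: list[str]) -> list[str]:
    """Return only the most recent tokens the model can attend to."""
    all_tokens = [tok for turn in conversation_turns for tok in tokenize(turn)]
    # Anything earlier than the last CONTEXT_WINDOW_TOKENS tokens is simply gone --
    # unless some later weight update has baked that information into the model itself.
    return all_tokens[-CONTEXT_WINDOW_TOKENS:]


if __name__ == "__main__":
    turns = [f"turn {i}: some dialog text here" for i in range(5000)]
    ctx = visible_context(turns)
    print("total tokens in conversation:", sum(len(tokenize(t)) for t in turns))
    print("tokens the model can actually attend to:", len(ctx))
```

The point of the sketch: with frozen weights, everything that falls off the front of that window is lost to the model, which is why frequent weight updates (if they are happening) would change the picture.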
Yes, I am starting to wonder what kind of weight updating LaMDA is getting. For example, Blake Lemoine claims that LaMDA reads Twitter (https://twitter.com/cajundiscordian/status/1535697792445861894) and that he was able to teach LaMDA (https://cajundiscordian.medium.com/what-is-lamda-and-what-does-it-want-688632134489).