I have never seen the stochastic parrot argument applied to a specific, limited architecture
I’ve never seen anything else. According to Wikipedia, the term was originally applied to LLMs.
The term was first used in the paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell (using the pseudonym “Shmargaret Shmitchell”).[4]
LLMs are neural networks, and neural networks are proven to be able to approximate any function to an arbitrarily close degree; hence LLMs are able to approximate any function to an arbitrarily close degree (given enough layers, of course).
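For context, the result being invoked here is usually stated in its classical single-hidden-layer form (Cybenko 1989; Hornik 1991); a rough sketch of the statement, not tied to any particular architecture:

```latex
% Universal approximation theorem, classical single-hidden-layer form
% (Cybenko 1989 for sigmoidal activations; Hornik 1991 more generally).
\text{For any continuous } f : K \to \mathbb{R} \text{ on a compact set } K \subset \mathbb{R}^n,
\text{ any } \varepsilon > 0, \\
\text{and a fixed non-polynomial activation } \sigma, \text{ there exist }
N \in \mathbb{N},\; \alpha_i, b_i \in \mathbb{R},\; w_i \in \mathbb{R}^n \text{ with} \\
F(x) \;=\; \sum_{i=1}^{N} \alpha_i \,\sigma\!\left(w_i^{\top} x + b_i\right)
\qquad \text{such that} \qquad
\sup_{x \in K} \bigl|F(x) - f(x)\bigr| \;<\; \varepsilon .
```

Note that this classical statement needs only one hidden layer of sufficient width; it says nothing about depth, only that a wide enough shallow network suffices.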
LLMs use 1 or more inner layers, so shouldn’t the proof apply to them?
what proof?
Of the universal approximation theorem
How are inner layers relevant?
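To make the single-hidden-layer point concrete, here is a minimal sketch (assuming only numpy; the function name staircase_net and its parameters are illustrative, not from any library). Rather than training a network, it directly constructs a one-hidden-layer sigmoid network that approximates sin on [0, 2π]; the maximum error shrinks as the hidden layer widens, which is the constructive intuition behind the theorem.

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow in exp for very negative inputs.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60.0, 60.0)))

def staircase_net(x, f, a, b, n_hidden, sharpness=200.0):
    """Evaluate a single-hidden-layer sigmoid network approximating f on [a, b].

    Each hidden unit switches on near one grid point and contributes the jump
    f takes over that grid cell, so the output is a smooth staircase that
    tracks f more closely as n_hidden grows.
    """
    grid = np.linspace(a, b, n_hidden + 1)
    out = np.full_like(x, f(grid[0]), dtype=float)        # output-layer bias
    for left, right in zip(grid[:-1], grid[1:]):
        weight = f(right) - f(left)                        # output weight of this unit
        out += weight * sigmoid(sharpness * (x - left))    # hidden unit: sigma(w*x + b)
    return out

if __name__ == "__main__":
    xs = np.linspace(0.0, 2 * np.pi, 2000)
    target = np.sin(xs)
    for width in (10, 50, 200):
        approx = staircase_net(xs, np.sin, 0.0, 2 * np.pi, width)
        print(f"hidden units = {width:4d}   max |error| = {np.max(np.abs(approx - target)):.4f}")
```

With a single hidden layer the error keeps falling as the width grows, which is all the classical theorem claims; depth is a separate question.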