Definitely. The lower the neuron-to-‘concept’ ratio, the more superposition is required to represent everything. That said, given the continuous-function nature of LNNs, they seem like the wrong abstraction for language. Image models? Maybe. Audio models? Definitely. Tokens and/or semantic data? That doesn’t seem practical.
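To make the ratio point concrete, here’s a rough sketch (mine, not anything from the thread): random unit vectors stand in for learned feature directions, and the off-diagonal cosine similarities measure how much the ‘concepts’ interfere with each other. As the neuron count drops relative to the feature count, the interference floor rises, which is the cost of superposition. All numbers here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_interference(d_neurons: int, n_features: int) -> float:
    # Random unit vectors as a stand-in for learned feature directions.
    W = rng.normal(size=(n_features, d_neurons))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    # Off-diagonal cosine similarities = cross-talk between concepts.
    G = W @ W.T
    np.fill_diagonal(G, 0.0)
    return float(np.abs(G).max())

# Fix the number of concepts, shrink the number of neurons:
for d in (512, 128, 32):
    print(d, round(max_interference(d, n_features=1024), 3))
```

Running this, the worst-case overlap between any two features grows as the dimension shrinks below the feature count: with more concepts than neurons, the directions can only be nearly orthogonal, never orthogonal, so every active feature bleeds a little into the others.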