I think I’m lacking some jargon here. What’s a latent/patent in the context of a large language model? “patent” is ungoogleable if you’re not talking about intellectual property law.
The Eyeronman link didn’t seem very informative. No explanation of how it works. I already knew sensory substitution was a thing, but is this different somehow? Is there some neural net pre-digesting its outputs? Is it similarly a random-seeming mishmash? Are there any other examples of this kind of thing working for humans? Visually?
Would the mishmash from a smaller text model be any easier/faster for the human to learn?
My money’s on: typo.