Would it be possible to determine the equivalent dimension of a layer of the human language cortex with this method? You can’t make API calls to a brain, but you can prompt people and estimate the probability of a response token by repeated sampling, maybe from different people.
The true rank is revealed because the output dimensionality is vocab_size, which is much larger than hidden_dim; it is unclear how to get something equivalent from the cortex. What you can do is record many neurons simultaneously (a population) and use dimensionality reduction (usually some form of manifold learning) to estimate the intrinsic dimensionality of that population's activity. This has been useful in some brain areas, such as the hippocampal formation.
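To make the first point concrete, here is a minimal sketch of why stacking logit vectors reveals hidden_dim. It simulates a toy unembedding (logits = W @ h), so the matrix sizes, names, and threshold are illustrative assumptions, not the exact procedure from the post:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden_dim, n_prompts = 5000, 64, 512

# Toy model: logits are a rank-hidden_dim linear map of hidden states.
W = rng.normal(size=(vocab_size, hidden_dim))   # unembedding matrix (assumed linear)
H = rng.normal(size=(hidden_dim, n_prompts))    # hidden states for many prompts
logits = W @ H                                  # shape (vocab_size, n_prompts)

# Stack the logit vectors and look at the singular value spectrum:
# they all live in a hidden_dim-dimensional subspace of R^vocab_size,
# so only ~hidden_dim singular values rise above the numerical noise floor.
s = np.linalg.svd(logits, compute_uv=False)
est_rank = int(np.sum(s > s[0] * 1e-8))
print(est_rank)  # ~64, i.e. hidden_dim
```

The brain gives you nothing like a vocab_size-dimensional readout per "prompt", which is why the population-recording plus manifold-learning route is the closer analogue.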
Could this be used to estimate the “number of parameters” of the brain?