--Human brains have special architectures, various modules that interact in various ways (priors?)
--Human brains don’t use Backprop; maybe they have some sort of even-better algorithm
This is a funny distinction to me. These things seem like two ends of a spectrum (something like the physical scale of “one unit of structure”: predictive coding is few-neuron-scale, modules are big-brain-chunk scale; in between there are micro-columns, columns, laminae, feedback circuits, relays, and fiber bundles; and below predictive coding there are the rules for dendrite and synapse change).
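To gesture at what I mean by “few-neuron-scale”, here’s a toy sketch, roughly in the spirit of Rao-and-Ballard-style predictive coding (the sizes, learning rates, and variable names are made up for illustration, not anyone’s actual model of cortex). The point is that both the inference step and the weight change use only locally available prediction errors, in contrast to backprop’s globally propagated gradients.

```python
import numpy as np

# Toy predictive-coding unit: a latent estimate x tries to predict an
# observation y through weights W.  Everything each "unit" needs -- its own
# activity and the prediction error it receives -- is locally available,
# which is the sense in which this is a few-neuron-scale story; backprop
# instead needs error signals propagated back through the whole network.
# (Sizes and rates are arbitrary; this is an illustration, not a brain model.)

rng = np.random.default_rng(0)
n_obs, n_latent = 4, 2
W = rng.normal(scale=0.1, size=(n_obs, n_latent))    # generative weights
lr_x, lr_w = 0.05, 0.01                              # inference / learning rates

def pc_step(y, W, n_inference_steps=100):
    """Settle the latent estimate for one observation, then update W locally."""
    x = np.zeros(n_latent)
    for _ in range(n_inference_steps):
        error = y - W @ x           # prediction error (computed where it's used)
        x += lr_x * (W.T @ error)   # nudge the estimate to reduce that error
    W = W + lr_w * np.outer(error, x)   # Hebbian-ish: error times local activity
    return x, W, error

# Fake data from a random "true" linear mapping, just to have something to fit.
true_W = rng.normal(size=(n_obs, n_latent))
for _ in range(300):
    y = true_W @ rng.normal(size=n_latent)
    x, W, error = pc_step(y, W)

print("residual prediction error on the last observation:", np.linalg.norm(error))
```

The module-scale end of the spectrum would then be about how many of these little error-minimizing circuits you wire together and how, which is a separate question from the local rule itself.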
I wouldn’t characterize my own position as “we know a lot about the brain.” I think we should taboo “a lot.”
I think there’s mounting evidence that brains use predictive coding
Are you saying there’s mounting evidence that predictive coding screens off all lower levels from all higher levels? Like, all high-level phenomena are the result of predictive coding, plus an architecture that hooks the bits of predictive coding together?