Cool, thanks for reading! I’ll have to research hemodynamics and possibly update accordingly. Regarding hiding complexities, I’m grossly oversimplifying almost everything in the article in order to get the theory across, so I’m curious which aspects of paradoxical sleep, neuromodulators, and the cerebellum specifically you think are important to include?
In my view they have suggestive interpretations as batchnorm, loss functions, and super-resolution, respectively. Obviously: thalamocortical loops as stacked autoencoders, cortical columns as convolutions, and visual cognition as a filmed network. Less obvious: sensory gating as an attention head.
Yeah, see, you’re going a level deeper than seems appropriate for this piece. For example, my overview of LLMs intentionally skips the inner workings of decoders, etc. So why would I include that level of detail for the brain? The goal of this article was to provide a high-level overview of these mechanisms, because a common general understanding is currently lacking from the discourse, across both AI and consciousness. Make sense?
Totally! After checking with my Torontonian son, “the most questionable” was poor handling of the English language on my part. The intended meaning was more like “the most interesting question is which complexity to hide.” Sorry for that! ☺️ To me the level of description is almost an artistic choice, so please don’t take my opinion as an expectation that you share the same taste.