Are there any examples yet, of homology or cohomology being applied to cognition, whether human or AI?
There’s plenty, including a line of work by Carina Curto, Kathryn Hess and others that is taken seriously by a number of mathematically inclined neuroscience people (Tom Burns, if he’s reading, can comment further). As far as I know this kind of work is the closest to breaking through into the mainstream. At some level you can think of homology as a natural way of preserving information in noisy systems, for reasons similar to why the (co)homology of tori was a useful way for Kitaev to formulate his surface code. Whether or not real brains/NNs have some emergent computation that makes use of this is a separate question; I’m not aware of really compelling evidence.
There is more speculative but definitely interesting work by Matilde Marcolli. I believe Manin has thought about this too (because he’s thought about everything), and if you have twenty years to acquire the prerequisites (Gamma spaces!) you can gaze into deep pools by reading that as well.
Topological data analysis comes closest, and there are some people who try to use it for ML, e.g.
Though my understanding is that this is used in interp not so much because people necessarily expect deep connections to homology, but because it’s just another way to look for structure in your data.
TDA itself is also a relatively shallow tool.
As someone who does both data analysis and algebraic topology, my take is that TDA showed promise, but something is missing that keeps it from reaching its full potential. Either the formalism isn’t developed enough, or it’s being consistently applied to the wrong kinds of datasets. Which is kind of a shame, because it’s the kind of thing that should work beautifully, and in some cases even does!
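To make the TDA idea concrete: the basic move is to build a simplicial complex on your data points at some scale and read off Betti numbers (b0 = connected components, b1 = loops). Here's a minimal, library-free sketch of that pipeline (my own toy illustration, not anyone's production code): a Vietoris–Rips complex at a single scale, with ranks of the boundary maps computed over GF(2). On points sampled from a circle, it recovers the circle's single loop.

```python
import math
from itertools import combinations

def rank_gf2(rows):
    # Rank of a matrix over GF(2); each row is encoded as an int bitmask.
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot  # lowest set bit serves as the pivot column
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank

def betti_numbers(points, eps):
    # Vietoris-Rips complex at scale eps; returns (b0, b1) over GF(2).
    n = len(points)
    edges = [(i, j) for i, j in combinations(range(n), 2)
             if math.dist(points[i], points[j]) <= eps]
    edge_index = {e: k for k, e in enumerate(edges)}
    triangles = [(i, j, k) for i, j, k in combinations(range(n), 3)
                 if (i, j) in edge_index and (j, k) in edge_index
                 and (i, k) in edge_index]
    # boundary of an edge = its two vertices; of a triangle = its three edges
    d1 = rank_gf2([(1 << i) | (1 << j) for i, j in edges])
    d2 = rank_gf2([(1 << edge_index[(i, j)]) | (1 << edge_index[(j, k)])
                   | (1 << edge_index[(i, k)]) for i, j, k in triangles])
    b0 = n - d1                  # components = vertices - rank(d1)
    b1 = len(edges) - d1 - d2    # loops = dim ker(d1) - rank(d2)
    return b0, b1

# 12 points on a circle; at scale 0.6 only neighbors connect, giving a 12-gon
pts = [(math.cos(2 * math.pi * t / 12), math.sin(2 * math.pi * t / 12))
       for t in range(12)]
print(betti_numbers(pts, 0.6))  # (1, 1): one component, one hole
```

Persistent homology, which the real libraries (GUDHI, Ripser) compute, does this across all scales at once and tracks which features survive, which is what separates signal from noise in practice.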
No.