I’m finishing up my PhD on tensor network algorithms at the University of Queensland, Australia, under Ian McCulloch. I’ve also proposed a new definition of wavefunction branches using quantum circuit complexity.
Predictably, I’m moving into AI safety work. See my post on graphical tensor notation for interpretability. I also attended the Machine Learning for Alignment Bootcamp in Berkeley in 2022, did a machine learning / neuroscience internship in 2020–2021, and wrote a post exploring the potential counterfactual impact of AI safety work.
My website: https://sites.google.com/view/jordantensor/
Contact me: jordantensor [at] gmail [dot] com. Also see my CV, LinkedIn, or Twitter.
This is an interesting and useful overview, though it’s important not to confuse their notation with the Penrose graphical notation I use in this post, since lines in their notation seem to represent the message-passing contributions to a vector, rather than the indices of a tensor.
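For concreteness, here’s a minimal sketch of what lines mean in Penrose graphical notation (using NumPy, with made-up tensor shapes): each line attached to a tensor is an index, and joining two lines contracts (sums over) that shared index.

```python
import numpy as np

# In Penrose graphical notation, each line attached to a tensor is an
# index, and joining two lines means summing over that shared index.
A = np.random.rand(2, 3, 4)  # a 3-tensor: three lines (indices i, j, k)
B = np.random.rand(4, 5)     # a matrix: two lines (indices k, l)

# Joining the k-lines of A and B contracts over k, leaving a tensor
# with three open lines: i, j, l.
C = np.einsum('ijk,kl->ijl', A, B)
print(C.shape)  # (2, 3, 5)
```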
That said, there are connections between tensor network contractions and message-passing algorithms like belief propagation, which I haven’t taken the time to really understand (a toy sketch of the simplest case follows the references below). Some references are:
Duality of graphical models and tensor networks—Elina Robeva and Anna Seigal
Tensor network contraction and the belief propagation algorithm—R. Alkabetz and I. Arad
Tensor Network Message Passing—Yijia Wang, Yuwen Ebony Zhang, Feng Pan, and Pan Zhang
Gauging tensor networks with belief propagation—Joseph Tindall and Matt Fishman
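As a toy sketch of the simplest case (my own illustration, not taken from the references above, and assuming random 3-state pairwise factors on a chain): the partition function of a chain of pairwise factors is exactly a tensor network contraction, and passing vector messages along the chain computes the same contraction one step at a time. Belief propagation is exact here because a chain is a tree.

```python
import numpy as np

# Toy sketch: on a chain (a tree), the partition function is both a
# tensor network contraction and the result of message passing.
np.random.seed(0)
factors = [np.random.rand(3, 3) for _ in range(4)]  # pairwise factors

# Tensor network view: contract the chain of matrices, then sum over
# the two open boundary indices.
Z_contraction = np.linalg.multi_dot(factors).sum()

# Message-passing view: a uniform boundary message absorbs one factor
# at a time; each step is a partial contraction of the network.
message = np.ones(3)
for f in factors:
    message = message @ f
Z_messages = message.sum()

print(np.allclose(Z_contraction, Z_messages))  # True
```

On loopy graphs the two pictures come apart, and (as I understand it) the references above are about exactly how and when message passing still approximates or gauges the full contraction.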