Applied Linear Algebra Lecture Series
Over the past couple of months, I gave weekly lectures on applied linear algebra. The lectures cover a grab-bag of topics which I’ve needed to know for my own work, but which typically either aren’t covered in courses or are covered only briefly in advanced courses which use them (e.g. quantum mechanics). The series is now complete, and recordings of all the lectures are available here.
Be warned: all of the lectures were given with zero review and minimal prep. There are errors. There are poor explanations and too few examples. There are places where I only vaguely gesture at an idea and then say to google it if and when you need it. The flip side is that you will see only things I know off the top of my head—and therefore things which I’ve found useful enough often enough to remember.
Outline of Topics
Lecture 1
Prototypical use cases of linear algebra
First-order approximation of systems of equations for solving or stability analysis
Second-order approximation of a scalar function in many dimensions for optimization or for characterizing peak/bowl shape
First-order approximation of a dynamical system near a steady state (see the sketch after this list)
Principal components of a covariance matrix
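To give a flavor of the steady-state item: here’s a minimal sketch, not from the lecture itself, of checking stability by linearizing at a fixed point and inspecting the Jacobian’s eigenvalues. The damped-pendulum system, fixed point, and step size are made up purely for illustration.

```python
import numpy as np

# Toy dynamical system dx/dt = f(x): a damped pendulum, chosen purely
# for illustration.
def f(x):
    return np.array([x[1], -np.sin(x[0]) - 0.5 * x[1]])

x_star = np.array([0.0, 0.0])  # steady state: f(x_star) = 0

# First-order approximation: finite-difference Jacobian at the steady state.
eps = 1e-6
J = np.column_stack([
    (f(x_star + eps * e) - f(x_star - eps * e)) / (2 * eps)
    for e in np.eye(len(x_star))
])

# Near x_star, d(x - x_star)/dt ~= J @ (x - x_star), so the steady state
# is linearly stable iff every eigenvalue of J has negative real part.
eigvals = np.linalg.eigvals(J)
print("eigenvalues:", eigvals)
print("stable:", bool(np.all(eigvals.real < 0)))
```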
Lecture 2
Working with efficient representations of large matrices
Tricks for Jacobian and Hessian matrices
Prototypical API for implicit matrix representations: scipy’s LinearOperator
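For that last item, here’s a minimal sketch of the pattern: scipy.sparse.linalg.LinearOperator lets you hand iterative solvers a matrix defined only by its matrix-vector product. The diagonal-plus-rank-one matrix below is just a stand-in for something too big to store explicitly.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 100_000
d = np.linspace(1.0, 2.0, n)                                  # diagonal part
u = np.random.default_rng(0).standard_normal(n) / np.sqrt(n)  # rank-one part

# A = diag(d) + u u^T, never materialized: O(n) storage and matvec cost
# instead of O(n^2). The iterative solver only ever calls the matvec.
A = LinearOperator((n, n), matvec=lambda x: d * x + u * (u @ x))

b = np.ones(n)
x, info = cg(A, b)  # conjugate gradient; A is symmetric positive definite
print("converged:", info == 0,
      "residual:", np.linalg.norm(d * x + u * (u @ x) - b))
```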
Lecture 3
Suppose we look at a matrix (e.g. using pyplot.matshow()). What patterns are we most likely to see, and what can we do with them?
Recognizing sparse & low-rank structure
Interpreting sparse & low-rank structure
Leveraging sparse & low-rank structure
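A small sketch of the recognize-then-leverage loop for low-rank structure (the matrix here is synthetic, rank 5 plus a little noise, just to have something concrete): the singular values fall off a cliff at the true rank, and keeping only the top factors turns O(n^2) matvecs into O(nk).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: a 1000 x 1000 matrix that is secretly rank 5 plus
# small noise, like the structure one often spots via matshow.
n, k = 1000, 5
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))
A += 1e-6 * rng.standard_normal((n, n))

# Recognize: singular values drop off a cliff after the true rank.
U, s, Vt = np.linalg.svd(A)
print("leading singular values:", np.round(s[:8], 4))

# Leverage: keep the top-k factors; storage and matvecs are now O(n k).
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k]
x = rng.standard_normal(n)
print("low-rank matvec error:", np.linalg.norm(A @ x - Uk @ (sk * (Vtk @ x))))
```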
Lecture 4
Matrix calculus, with a focus on stability of eigendecomposition
Basics: tensor notation
Differentiating eigendecomposition
Instability of eigenvectors of (approximately) repeated eigenvalues
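The punchline of that last item fits in one small experiment (the matrix and perturbation sizes are cooked up to exaggerate the effect): when two eigenvalues are nearly equal, a perturbation much larger than the gap barely moves the eigenvalues but rotates the corresponding eigenvectors completely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric matrix with two nearly-equal eigenvalues (gap of 1e-9).
A = np.diag([1.0, 1.0 + 1e-9, 3.0])

# A tiny symmetric perturbation -- but still huge compared to the gap.
E = 1e-6 * rng.standard_normal((3, 3))
E = (E + E.T) / 2

w0, V0 = np.linalg.eigh(A)
w1, V1 = np.linalg.eigh(A + E)

# Eigenvalues move by O(1e-6); eigenvectors within the near-degenerate
# pair rotate by O(perturbation / gap), i.e. completely.
print("max eigenvalue shift:", np.abs(w1 - w0).max())
print("eigenvector overlaps |V0^T V1|:\n", np.round(np.abs(V0.T @ V1), 3))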
Lecture 5
Leveraging symmetry
Suppose my system is invariant under some permutation (e.g. a PDE with wraparound boundary, or exchangeable variables in a covariance matrix). How can I leverage that to more efficiently find an eigendecomposition (and invert the matrix, etc.)? (See the sketch after this list.)
What Fourier transforms have to do with symmetry, and how to compute them quickly
How to represent rotations/orthogonal matrices
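Here’s one concrete instance of the permutation-symmetry question above (the stencil and size are arbitrary, picked just for illustration): a circulant matrix, i.e. one invariant under cyclic shifts like a periodic PDE stencil, is diagonalized by the Fourier basis. So its eigenvalues are just the FFT of its first column, and solves cost O(n log n).

```python
import numpy as np
from scipy.linalg import circulant

# A shift-invariant operator: a shifted periodic second-difference stencil.
# The stencil and size are arbitrary; any circulant works the same way.
n = 8
c = np.zeros(n)
c[[0, 1, -1]] = [3.0, -1.0, -1.0]
C = circulant(c)

# Invariance under cyclic shifts => the Fourier modes are eigenvectors,
# and the eigenvalues are the FFT of the first column.
lam = np.fft.fft(c)

# So C x = b is solved in O(n log n), without ever forming C.
b = np.arange(n, dtype=float)
x = np.fft.ifft(np.fft.fft(b) / lam).real
print("solved:", np.allclose(C @ x, b))
```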
Lecture 6
Wedge products: those “dx dy dz” things in integrals
How to do coordinate transformations with things like “dx dy”, even when embedded in a higher-dimensional space (see the sketch after this list)
Map between function operations/properties and matrix operations/properties
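And a numerical taste of the wedge-product material (the sphere parametrization and grid resolution are just for illustration): for a surface r(u, v) embedded in R^3, the area form pulls back to |r_u ∧ r_v| du dv, and in R^3 the magnitude of that wedge product is the norm of the cross product of the tangent vectors. Integrating it over the parameter rectangle recovers the surface area.

```python
import numpy as np

# A surface embedded in a higher-dimensional space: the unit sphere in R^3,
# parametrized by polar angle u and azimuth v (chosen just for illustration).
def r(u, v):
    return np.array([np.sin(u) * np.cos(v), np.sin(u) * np.sin(v), np.cos(u)])

# Pulling back the area form: "dA" becomes |r_u ^ r_v| du dv, and in R^3 the
# wedge product's magnitude is the cross-product norm of the tangents.
def area_element(u, v, eps=1e-6):
    ru = (r(u + eps, v) - r(u - eps, v)) / (2 * eps)
    rv = (r(u, v + eps) - r(u, v - eps)) / (2 * eps)
    return np.linalg.norm(np.cross(ru, rv))

# Midpoint-rule integral over the parameter rectangle; should give 4*pi.
nu, nv = 100, 200
du, dv = np.pi / nu, 2 * np.pi / nv
us = (np.arange(nu) + 0.5) * du
vs = (np.arange(nv) + 0.5) * dv
area = sum(area_element(u, v) for u in us for v in vs) * du * dv
print(area, "vs", 4 * np.pi)
```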