Wait, which algorithm for semidefinite programming are you using? The ones I’ve seen look like they should translate to a runtime even slower than O(n^ω √m). For example the one here:
https://arxiv.org/pdf/2009.10217.pdf
Also, do you have a source for the runtime of PSD testing being n^ω? I assume no lower bound is known, i.e. I doubt PSD testing is hard for matrix multiplication. Am I wrong about that?
That’s a good question. From what I’ve seen, PSD testing can be done by attempting a Cholesky decomposition (writing the matrix as LL* with L lower-triangular) and checking whether it fails. The Cholesky decomposition is an LU decomposition in which the upper-triangular factor is simply U = L*, so L and U share the same diagonal entries; PSD testing should therefore have the same complexity as LU decomposition. Wikipedia cites Bunch and Hopcroft (1974), who show that LU decomposition can be done in O(n^{log₂ 7}) via Strassen, and presumably the more modern matrix multiplication algorithms also give an improvement for LU.
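As a concrete illustration, here is a minimal numpy sketch of the Cholesky-based test. (Note that numpy’s `cholesky` rejects matrices that are singular-but-PSD, so strictly speaking this tests positive *definiteness*; a genuine semidefiniteness test would need a tolerance or pivoting.)

```python
import numpy as np

def is_positive_definite(A):
    """Test positive definiteness by attempting a Cholesky factorization.

    numpy raises LinAlgError exactly when the factorization A = L L*
    breaks down, i.e. when some pivot fails to be positive.
    """
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False
```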
I also doubt that PSD testing is hard for matrix multiplication, even though you can get farther than you’d think. Given a positive-definite matrix A whose inverse we are interested in, consider the 2n×2n block matrix (A, id; id, C). It is positive-definite if and only if all leading principal minors are positive. The minors that are minors of A are positive by assumption, and the bigger minors are equal to det(A) times the corresponding minors of C − A⁻¹ (a Schur-complement identity), so altogether the big matrix is positive-definite iff C − A⁻¹ is. Continuing in this direction, we can get any specific component of A⁻¹ to precision ε in O(PSD) time (times log(1/ε)). This is not enough at all to get the full inverse.
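This reduction can be demonstrated in a few lines. The sketch below (hypothetical helper names, using a Cholesky-based PD test as the oracle) checks the 2n×2n block equivalence, and then uses the (n+1)×(n+1) special case — the bordered matrix (A, e_i; e_iᵀ, t) is positive-definite iff t > (A⁻¹)_{ii} — to binary-search a diagonal entry of A⁻¹ using only PD tests:

```python
import numpy as np

def psd_test(M):
    # Stand-in PD oracle: attempt a Cholesky factorization.
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

def block_test(A, C):
    # PD test of the 2n x 2n matrix [[A, I], [I, C]]; by the
    # Schur complement this holds iff A and C - A^{-1} are both PD.
    n = A.shape[0]
    I = np.eye(n)
    return psd_test(np.block([[A, I], [I, C]]))

def inverse_diag_entry(A, i, lo, hi, iters=60):
    # Binary-search (A^{-1})_{ii}: the bordered matrix
    # [[A, e_i], [e_i^T, t]] is PD iff t - e_i^T A^{-1} e_i > 0,
    # so each PD test yields one bit of the comparison with t.
    n = A.shape[0]
    e = np.zeros((n, 1))
    e[i, 0] = 1.0
    for _ in range(iters):
        t = (lo + hi) / 2
        if psd_test(np.block([[A, e], [e.T, np.array([[t]])]])):
            hi = t  # t is above (A^{-1})_{ii}
        else:
            lo = t  # t is at or below (A^{-1})_{ii}
    return (lo + hi) / 2
```

Each call to `psd_test` extracts exactly one bit, which is what makes the log(1/ε) factor appear.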
There is a simple intuition for why PSD testing cannot be hard for matrix multiplication or inversion: regardless of how you do it and what matrix you apply it to, a single test only gives you one bit of information. Getting even just one bit of information about each matrix element of the result requires n² applications of PSD testing. The only way out would be if one only needed to apply PSD testing to tiny matrices.