The Gram matrix bit is certainly correct, though I’m not sure how the distances come into it. The m constraints are constraints on dot products, so they’re more like angles (well, cosines) than distances between vectors.
I think you can convert between the two representations in O(m) time, which would mean that any algorithm that solves either version in O(n*m) solves both in O(n*m).
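Concretely, the conversion is just the identity $(x_i - x_j)^2 = x_i^2 + x_j^2 - 2\,x_i \cdot x_j$ applied once per constraint, assuming the squared norms on the diagonal are known. A minimal sketch, where the (i, j, value) triple format and the function names are my own invention:

```python
# Sketch of the O(m) conversion between the two constraint representations.
# Assumes the squared norms (the diagonal entries) are known; the
# (i, j, value) triple format is made up for illustration.

def dots_to_sq_distances(dot_constraints, sq_norms):
    """Turn each known dot product x_i . x_j into a known squared distance."""
    return [(i, j, sq_norms[i] + sq_norms[j] - 2.0 * g)
            for (i, j, g) in dot_constraints]

def sq_distances_to_dots(dist_constraints, sq_norms):
    """Inverse direction: x_i . x_j = (|x_i|^2 + |x_j|^2 - d_ij^2) / 2."""
    return [(i, j, 0.5 * (sq_norms[i] + sq_norms[j] - d2))
            for (i, j, d2) in dist_constraints]
```

Each constraint is touched exactly once, so both directions are O(m) given the diagonal.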
Do you have some large positive and negative examples of the kinds of sparse matrices you’re trying to check for the existence of a PSD completion of, or alternatively a method for generating such examples with a known ground truth? I have a really dumb idea for a possible algorithm here (that shamelessly exploits the exact shape of this problem in a way that probably doesn’t generalize to being useful for broader problems like MDS) that I think would run within roughly the time constraints you’re looking for. It almost certainly won’t work, but I think it’s at least worth an hour of my time to check and figure out why (especially since I’m trying to improve my linear algebra skills anyway).
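In the meantime, one way to get ground truth on small instances (definitely not the fast algorithm, just a labelling oracle) is a brute-force feasibility SDP. A sketch using cvxpy, with a dict-of-known-entries input format I’m making up here:

```python
import cvxpy as cp

def has_psd_completion(known, n):
    """Slow ground-truth oracle: does any n-by-n PSD matrix match the known
    entries? `known` maps (i, j) with i <= j to its value (diagonal included)."""
    X = cp.Variable((n, n), PSD=True)                  # symmetric PSD variable
    constraints = [X[i, j] == v for (i, j), v in known.items()]
    problem = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
    problem.solve()
    return problem.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)
```

Far too slow for what you actually want, but it should be enough to label small randomly generated instances.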
Edit: there’s the obvious approach, which I’m trying, of “start with only 1s on the diagonal and then keep adding random entries until it no longer has a PSD completion, then removing random entries until it does, and repeat to build a test set” but I doubt that covers the interesting corners of the problem space.
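A rough sketch of that loop, written against a generic check(known, n) oracle (e.g. the feasibility SDP above); the interface and the uniform(−1, 1) values are arbitrary choices on my part:

```python
import random

def make_test_pair(n, check, rng=None):
    """Add entries until infeasible, then remove until feasible again.
    check(known, n) is any PSD-completion oracle, e.g. the SDP sketch above."""
    rng = rng or random.Random()
    known = {(i, i): 1.0 for i in range(n)}        # start with only 1s on the diagonal
    candidates = [(i, j) for i in range(n) for j in range(i + 1, n)]
    rng.shuffle(candidates)

    negative = None
    for (i, j) in candidates:
        known[(i, j)] = rng.uniform(-1.0, 1.0)     # keep adding random entries...
        if not check(known, n):                    # ...until no PSD completion exists
            negative = dict(known)                 # snapshot a negative example
            break

    off_diag = [k for k in known if k[0] != k[1]]
    rng.shuffle(off_diag)
    while off_diag and not check(known, n):        # remove entries until completable again
        del known[off_diag.pop()]
    positive = dict(known)
    return positive, negative                      # negative is None if we never went infeasible
```

The feasible/infeasible pair it returns differs only in the entries removed at the end, so at least the examples sit near the boundary.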
Edit 2: the really dumb thing does not work. I think I haven’t ruled out that a slightly less dumb approach could work though?
Edit 3: never mind, my really dumb “solution” requires inverting a matrix that is, in the worst case, n×n; e.g. if you have an input that looks like
1 n n n n n n
n 1 - - - - n
n - 1 - - - n
n - - 1 - - n
n - - - 1 - n
n - - - - 1 n
n n n n n n 1
you’ll have to invert six 2×2 matrices and one each of 3×3 through 7×7 matrices.
If you are given the dot products exactly, that’s equivalent to being given the distances exactly.
(I think this reduction goes the wrong way though; if I wanted to solve the problem about distances I’d translate it back into dot products.)
You mean as in, if I have two points $x_i$ and $x_j$, then $(x_i - x_j)^2 = x_i^2 + x_j^2 - 2\,x_i \cdot x_j$?
Guess so, but yeah, I just don’t tend to think of it immediately in terms of the distances rather than just the vectors. It works though, since you mentioned the diagonal elements are known.