I think logarithmic dependence on 1/epsilon will be difficult, especially for problem one. To see why, consider the problem of approximating the maximum eigenvalue of a sparse (say m = O(n)) PSD matrix with trace at most n. The obvious algorithms to try aren't fast enough:
- power iteration / Lanczos compute M^k v in time O(mk) using sparse matrix-vector multiplies, but the eigenvalue estimate converges only polynomially in 1/k. Such methods therefore exhibit the undesirable polynomial dependence on 1/epsilon (see the first sketch after this list).
- repeated squaring computes M^(2^k) in time ω(n^2 k) using dense matrix multiplication, with error decaying exponentially in k. Though k need only be logarithmic in 1/epsilon, the dense matrix multiplication steps render the algorithm too slow (see the second sketch after this list).
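To make the first point concrete, here is a minimal sketch of power iteration with sparse matvecs (the helper name `power_iteration_max_eig` and the use of numpy/scipy are my own assumptions, not part of the problem statement). Each iteration is one O(m) sparse matvec, but without an eigengap assumption the Rayleigh-quotient estimate converges only polynomially in 1/k:

```python
import numpy as np
import scipy.sparse as sp

def power_iteration_max_eig(M: sp.spmatrix, k: int, seed: int = 0) -> float:
    """Estimate lambda_max of a sparse PSD matrix M with k power iterations."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(k):
        v = M @ v                  # one sparse matvec: O(m) time
        v /= np.linalg.norm(v)
    return float(v @ (M @ v))      # Rayleigh quotient; error decays only poly(1/k)
```

Total work is O(mk), so driving the additive error below epsilon generically forces k = poly(1/epsilon).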
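For contrast, a sketch of repeated squaring (again a hypothetical helper). It exploits the identity trace(M^p) = sum_i lambda_i^p, so trace(M^p)^(1/p) is within a factor n^(1/p) of lambda_max, and k = O(log(log(n)/epsilon)) squarings suffice; but the very first squaring already densifies the matrix:

```python
import numpy as np
import scipy.sparse as sp

def repeated_squaring_max_eig(M: sp.spmatrix, k: int) -> float:
    """Estimate lambda_max via trace(M^(2^k))^(1/2^k) using k dense squarings."""
    A = M.toarray()                  # sparsity is lost immediately
    log_scale = 0.0                  # rescale each step to avoid overflow
    for _ in range(k):
        s = np.linalg.norm(A)        # Frobenius norm of the current power
        A = (A / s) @ (A / s)        # dense multiply: omega(n^2) work per step
        log_scale = 2 * (log_scale + np.log(s))  # invariant: M^(2^j) = exp(log_scale) * A
    p = 2 ** k
    # lambda_max <= trace(M^p)^(1/p) <= n^(1/p) * lambda_max, so logarithmically
    # many squarings give epsilon relative error -- each at >= n^2 cost.
    return float(np.exp((np.log(np.trace(A)) + log_scale) / p))
```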
Intuitively, power iteration-style methods exploit sparsity by taking matrix-vector products exclusively, and this restriction is what yields slow (polynomial in 1/epsilon) convergence.
To argue that problem one is difficult, I claim that one natural candidate completion is the all-zero completion, which results in a sparse matrix. This suggests (but does not prove!) that determining the existence of PSD completions requires determining whether this sparse completion is PSD; equivalently, whether its minimum eigenvalue is nonnegative. The reduction to the max eigenvalue problem then follows by taking spectral shifts (M → aI + bM for a, b ∈ R), as in the sketch below.
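To spell out the shift, a sketch under the same assumptions (it reuses the hypothetical `power_iteration_max_eig` from the first sketch; any max-eigenvalue oracle would do). Taking c to be any upper bound on lambda_max(B), e.g. the Frobenius norm, the matrix cI - B is still sparse and PSD, and lambda_min(B) = c - lambda_max(cI - B):

```python
import numpy as np
import scipy.sparse as sp

# assumes power_iteration_max_eig from the first sketch (or any max-eig oracle)

def min_eig_via_max_eig(B: sp.spmatrix, k: int) -> float:
    """Estimate lambda_min of a sparse symmetric B via a max-eigenvalue oracle."""
    n = B.shape[0]
    c = np.sqrt(B.multiply(B).sum())  # Frobenius norm upper-bounds lambda_max(B)
    shifted = (c * sp.identity(n, format="csr") - B).tocsr()  # sparse: m + n nonzeros
    return c - power_iteration_max_eig(shifted, k)  # sign decides PSD-ness up to epsilon
```

An epsilon-accurate estimate of lambda_max(cI - B) thus pins down lambda_min(B) to within epsilon.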
It’s unclear to me whether problem two has a similar dependence, but given the similarity of the problems I’m skeptical that logarithmic dependence on 1/epsilon is possible. If anyone has ideas for a similar reduction, I’d love to discuss!
(comment based on work co-authored with @Somsubhro Bagchi)
note: Som and I have discussed a similar objection with @paulfchristiano, who seemed OK with solutions with convergence polynomial in 1/epsilon but still O(mn) in the “typical case” (e.g., for problem 1, with enough precision to determine the (non)existence of PSD completions). After constraining all diagonal elements of A to be 1, we’re somewhat optimistic that such weaker convergence is attainable.