I should expand on this… Geometric complexity theory is about posing and solving complexity-class problems in a geometric context. For example: computing the permanent of a matrix is #P-complete, while computing the determinant is in P. Permanents and determinants can be thought of in terms of algebraic subvarieties (the orbit closures of the corresponding polynomials), and showing that a certain mapping cannot turn determinants into permanents (conjecture 3.2 in the third paper) would show that the permanent has no small determinantal expression, an algebraic analogue of P not being #P. The idea is to use certain new constructions from mathematical physics (e.g. the first paper) to understand such mappings. The space of orbits under the mapping is a quotient of the target space, and there is a history of using techniques from physics to understand these particular quotient spaces. The big development I see brewing is the relation between the “geometric Langlands” program and the quantization of the 5-brane in M-theory, for which a whole new approach to quantization has to be invented. So I’d like to see what happens if you take those new ideas and push them along the chain from the first paper to the third. (But that first paper is just supposed to be representative of the ongoing work; I’m not saying it specifically contains the technically appropriate concepts.)
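To make the starting complexity gap concrete, here is a minimal sketch (mine, not from the papers): the determinant falls to Gaussian elimination in polynomially many arithmetic operations, while the best known exact algorithms for the permanent, such as Ryser's inclusion-exclusion formula, take time exponential in the matrix size. The function names and the toy matrix are just for illustration.

```python
# Determinant vs. permanent: a polynomial-time algorithm vs. the best known
# exponential-time exact method (Ryser's formula). Illustrative sketch only.
from itertools import combinations
from fractions import Fraction

def determinant(M):
    """Determinant by Gaussian elimination with exact rationals, O(n^3) arithmetic ops."""
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    det = Fraction(1)
    for i in range(n):
        # Find a nonzero pivot in column i at or below row i.
        pivot = next((r for r in range(i, n) if A[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            A[i], A[pivot] = A[pivot], A[i]
            det = -det  # row swap flips the sign
        det *= A[i][i]
        for r in range(i + 1, n):
            factor = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= factor * A[i][c]
    return det

def permanent(M):
    """Permanent via Ryser's inclusion-exclusion formula, O(2^n * n^2) arithmetic ops."""
    n = len(M)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - k) * prod
    return total

if __name__ == "__main__":
    M = [[1, 2], [3, 4]]
    print(determinant(M))  # 1*4 - 2*3 = -2
    print(permanent(M))    # 1*4 + 2*3 = 10
```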
Meanwhile, for P versus NP, there is a particular barrier to proof which Mulmuley and collaborators hope to get around because of the specialness of the varieties. Right now I’m trying to understand what it is about the algebraic-geometric facts that gives them the right logical and combinatorial properties to evade this “naturalization” (natural proofs) barrier. Then I want to look at models of CEV (e.g. for a population of DFT agents) from this perspective, to see whether it will help with the difficult problems there (the interplay between self-enhancement, utility function discovery, and utility function renormalization).
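For reference, my own gloss of that barrier (not taken from the GCT papers): Razborov and Rudich showed that any lower-bound strategy built on a property of Boolean functions that is both constructive and large, and useful against P/poly, would break strong pseudorandom functions; the hope is that GCT-style properties fail largeness.

```latex
% The Razborov--Rudich conditions, paraphrased (my summary, not from the GCT papers).
A property $\Lambda_n \subseteq \{f\colon\{0,1\}^n \to \{0,1\}\}$ is \emph{natural} if it is
\begin{itemize}
  \item \textbf{constructive}: membership $f \in \Lambda_n$ is decidable in time $2^{O(n)}$,
        i.e.\ polynomial in the $2^n$-bit truth table of $f$; and
  \item \textbf{large}: $\Pr_f\,[\,f \in \Lambda_n\,] \ge 2^{-O(n)}$ for uniformly random $f$.
\end{itemize}
If a natural property is also \emph{useful} against $\mathsf{P/poly}$ (every $f \in \Lambda_n$
requires superpolynomial-size circuits), then pseudorandom function families of hardness
$2^{n^{\varepsilon}}$ cannot exist. The tentative GCT escape route is that obstructions
attached to the determinant and permanent orbit closures concern only highly symmetric
polynomials, so the corresponding property is plausibly not large.
```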
Another thing: although they don’t discuss it, it looks like this method might also get around the relativization barrier, since the associated varieties will look very different once an oracle is attached.
Yeah, that seems to fit with the impression I got from the papers. I’m not convinced that this can overcome the natural proofs barrier, but this looks more promising than other attacks I’ve seen. (Unfortunately this is potentially far enough from my own area of expertise that evaluating it in any great detail is probably going to be very difficult.)
Won’t work to separate P from NP.
Thanks. Reading those now.
I took it to MathOverflow after Witten’s latest paper. It would be crazy if string theory were the key to proving that P is not NP!