Given that you bring up the n-body problem twice (speciously, IMO: we're not dealing with infinitesimal points in a Newtonian astronomical context but with constrained molecules of nonzero size in solids, fluids, and the atmosphere, so you might as well say "A* pathfinding can't work because it requires solving the n-body problem!"; and you ignore approximations), you may be interested to know that the n-body problem is in fact exactly soluble: see "The Solution of the n-body Problem".
Impossibility proofs are tricky things.
I was specifically referring to the difficulty of solving it for determining electron position, which, in the classical limit, is exactly analogous to infinitesimal points moving around under forces that obey an inverse-square law. There's a sign difference in the Hamiltonian since electrons repel, but it's essentially the same problem, with essentially the same difficulties. We can solve the hydrogen atom (the two-body problem), we have some solutions for the helium atom (the three-body problem), and we more or less give up after that.
As for the solution to the n-body problem, I assume you're referring to the infinite series solution, which is known to converge very slowly. I'll try to read Qiudong Wang's work and check whether this is true. We (and by "we" I mean Poincare) have proven you can't solve it with algebra and integrals, and computers are known to be bad at derivatives. I think this may weaken my argument: if calculating an infinite series solution to the S.E. is possible, it would in principle allow you to numerically solve quantum mechanics problems to arbitrary accuracy, which right now we're incapable of. I'll need to look at how the solution behaves as a function of accuracy and n.
I will say I'm much happier with the tentative statement "An AI may be able to devise novel solutions for coupled differential equations" than with "An AI will get nanotechnology". Reducing the latter statement toward the former could, I think, give us much tighter bounds on what we expect to happen.
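To make the slow-convergence worry concrete with a deliberately simple stand-in (this is the Leibniz series for pi, not Wang's series): when a series converges like 1/n, every extra digit of accuracy costs roughly ten times more terms, which is exactly the scaling question that decides practicality.

```python
import math

def leibniz_pi(n_terms):
    """Partial sum of the Leibniz series: pi = 4 * (1 - 1/3 + 1/5 - ...).
    The error after n terms shrinks only like 1/n."""
    return 4.0 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

# Each factor-of-100 increase in work buys roughly two more digits:
for n in (100, 10_000, 1_000_000):
    print(n, abs(leibniz_pi(n) - math.pi))
```

If Wang's series behaves anything like this (or worse) as a function of accuracy and n, the exact solution is theoretical rather than practical, which is the question I need to check.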
Thanks! Great contribution.
I thought you were talking about quantum difficulties in arranging things on the nano scale? I am not sure what your classical limit is, or that it would apply to the argument you originally made.
Your reference also claims that there is a solution, but it has a million terms and is sensitive to round-off error, and is thus impractical to use in any sort of numerical work, so it does not, off the top of my head, substantially affect my line of reasoning.
Indeed, the exact solution is worse than the known approximation methods. That we know the exact solution, but choose not to use it, is still interesting and relevant… I’ll remind you of your first use of the n-body problem:
Solving the Schrodinger equation is essentially impossible. We can solve it more or less exactly for the Hydrogen atom, but things get very very difficult from there. This is because we don't have a simple solution for the three-body problem, much less the n-body problem. Approximately, the difficulty is that because each electron interacts with every other electron, you have a system where to determine the forces on electron 1, you need to know the position of electrons 2 through N, but the position of each of those electrons depends somewhat on electron 1. We have some tricks and approximations to get around this problem, but they're only justified empirically.
To pick an obvious observation: if we have an exact solution, however inefficient, does that not immediately give us both theoretical and empirical ways to justify the fast approximations by comparing them to the exact answers using only large amounts of computing power—and never appealing to experiments?
The Hamiltonians for the two systems are essentially identical. If you treat electrons as having a well-defined position and momentum (hence the classical limit), then the problem of atomic bonding is exactly the same as the gravitational n-body problem (plus a sign change to handle repulsion). I'll have to sit down and do a bunch of math before I can say exactly how the quantum aspects affect the infinite series solution presented. But my general statement that
H_{\text{quantum}} = \sum_i \frac{p_i^2}{2m_i} + \sum_{i \neq j} \frac{q_i q_j}{|r_{ij}|} \approx H_{\text{grav}} = \sum_i \frac{p_i^2}{2m_i} + \sum_{i \neq j} \frac{m_i m_j}{|r_{ij}|}
is trivially true, and this is why I introduced solving the many-body S.E. as approximately equivalent to the n-body problem.
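To make the structural identity concrete, here is a toy sketch: a single routine computes inverse-square pairwise forces, and only the coupling constant distinguishes gravity (k = -G with masses as the charges) from electrostatics (k = +k_e with electric charges). The function name and setup are illustrative, not any standard library's API.

```python
def pairwise_inverse_square_forces(positions, charges, k):
    """Net force on each particle under an inverse-square pair law.
    With k = -G and charges = masses this is Newtonian gravity;
    with k = +k_e and charges = electric charges it is electrostatics.
    The functional form is identical; only k and its sign change."""
    n = len(positions)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [positions[i][a] - positions[j][a] for a in range(3)]
            r = sum(d * d for d in dx) ** 0.5
            # F = k * q_i * q_j / r^2, directed along the separation;
            # positive k and like charges give repulsion.
            mag = k * charges[i] * charges[j] / r**2
            for a in range(3):
                forces[i][a] += mag * dx[a] / r
    return forces
```

Note the nested loop: every particle couples to every other, which is exactly the "electron 1 depends on electrons 2 through N" difficulty, regardless of which constant you plug in.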
Exactly why I think you’ve made a good point. I need to look at the approximation and see if it’s possible. If it has 10^24 derivatives to get chemical accuracy, and scales poorly with respect to n, then it’s probably not useful in practice, but the argument you make here explicitly is exactly the argument I understood implicitly from your previous post.
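The validation scheme being proposed can be sketched abstractly: sample points from the domain, run the expensive exact computation there, and bound the fast approximation's error against it, with no experiments at all. In this toy version, cheap_sin and math.sin are hypothetical stand-ins for the fast approximation and the exact (but expensive) solution:

```python
import math

def cheap_sin(x):
    """Fast cubic approximation of sin on [0, pi/2]; a hypothetical
    stand-in for an efficient but as-yet-unjustified approximation."""
    return x - x**3 / 6

def certify(approx, exact, lo, hi, samples=10_000):
    """Bound the approximation's worst-case error on [lo, hi] by
    brute-force comparison against the exact computation."""
    worst = 0.0
    for i in range(samples + 1):
        x = lo + (hi - lo) * i / samples
        worst = max(worst, abs(approx(x) - exact(x)))
    return worst

worst = certify(cheap_sin, math.sin, 0.0, math.pi / 2)
```

Whether this is feasible for the n-body series is precisely the open question: here the "exact" reference is cheap to evaluate, whereas a million-term series at chemical accuracy may not be.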
Alright, I will take your word for it. I had never seen anyone say that the classical Newtonian-mechanical sort of n-body problem was almost identical to a quantum intra-atomic version, though.
If nothing else, it’s an interesting example of a data/computation tradeoff.
(To expand for people not following: in the OP, he claims that an algorithm/AI which wants to design effective MNT must deal with problems equivalent to the n-body problem; however, since there is no solution to the n-body problem, it must use approximations; but by the nature of approximations, it’s hard to know whether one has made a mistake, one wants experimental data confirming the accuracy of the approximation in the areas one wants to use it; hence an AI must engage in possibly a great deal of experimentation before it could hope to even design MNT. I pointed out that there is a proven exact solution to the n-body problem contrary to popular belief; however, this solution is itself extremely inefficient and one would never design using it; but since this solution is perfect, it does mean that a few chosen calculations of it can replace the experimental data one is using to test approximations. This means that in theory, with enough computing power, an AI could come up with efficient approximations for the n-body problem and get on with all the other tasks involved in designing MNT without ever running experiments. Of course, whether any of this matters in practice depends on how much experimenting or how much computing power you think is available in realistic scenarios and how wedded you are to a particular hard-takeoff-using-MNT scenario; if you’re willing to allow years for takeoff, obviously both experimentation and computing power are much more abundant.)
There are differences and complications because of things like the uncertainty principle, magnetism, and the Pauli exclusion principle, but to first order the dominant effect on an individual atomic particle is the Coulomb force, and its form is identical to that of the gravitational force. The symmetry in the force laws may be more obvious than in the Hamiltonian formulation I gave before.
F_G = \frac{G m_1 m_2}{r^2} \quad \text{and} \quad F_C = \frac{k_e q_1 q_2}{r^2}
The particularly interesting point is that even without doing any quantum mechanics at all, even if atomic bonding were only a consequence of classical electrostatic forces, we still wouldn’t be able to solve the problem. The difficulty generated by the n-body problem is in many ways much greater than the difficulty generated by quantum mechanics.
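One way to see the purely classical difficulty concretely: with no closed-form solution you must integrate numerically, and even the two-body case punishes a carelessly chosen method. A toy comparison (units with G = M = 1, a circular orbit, forward Euler versus leapfrog, with energy drift as the accuracy diagnostic):

```python
def simulate(method, steps=1000, dt=0.01):
    """Integrate a unit-mass body on a circular orbit around a fixed
    unit mass at the origin (G = 1); return |energy drift|."""
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0  # r = v = 1 gives a circular orbit

    def acc(x, y):
        r = (x * x + y * y) ** 0.5
        return -x / r**3, -y / r**3

    def energy(x, y, vx, vy):
        return 0.5 * (vx * vx + vy * vy) - 1.0 / (x * x + y * y) ** 0.5

    e0 = energy(x, y, vx, vy)
    for _ in range(steps):
        if method == "euler":
            ax, ay = acc(x, y)
            x, y = x + vx * dt, y + vy * dt
            vx, vy = vx + ax * dt, vy + ay * dt
        else:  # leapfrog (kick-drift-kick)
            ax, ay = acc(x, y)
            vx, vy = vx + 0.5 * ax * dt, vy + 0.5 * ay * dt
            x, y = x + vx * dt, y + vy * dt
            ax, ay = acc(x, y)
            vx, vy = vx + 0.5 * ax * dt, vy + 0.5 * ay * dt
    return abs(energy(x, y, vx, vy) - e0)
```

Forward Euler steadily pumps energy into the orbit while leapfrog's error stays bounded, and this is the easy case: for n bodies, the chaotic sensitivity of the true trajectories compounds whatever the integrator does.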
Also, nice summary.
I am not a physicist, but this Stack Exchange answer seems to disagree with your assessment: What are the primary obstacles to solve the many-body problem in quantum mechanics?
This is sort of true. The fact that it turns into the n-body problem prevents us from doing quantum mechanics analytically. Once we're stuck doing it numerically, all the issues of sampling density of the wave function and the like crop up, and they make it very difficult to solve numerically.
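To make the sampling-density point concrete: a many-electron wavefunction is a function on a 3N-dimensional configuration space, not on ordinary 3-space, so naive tabulation blows up immediately. A back-of-envelope sketch:

```python
def grid_points(n_electrons, points_per_axis=10):
    """Samples needed to tabulate an n-electron wavefunction on a naive
    grid: each electron contributes 3 spatial dimensions, so storage
    grows as points_per_axis ** (3 * n_electrons)."""
    return points_per_axis ** (3 * n_electrons)

# Even a coarse 10-point-per-axis grid explodes immediately:
for n in (1, 2, 5, 10):
    print(n, grid_points(n))
```

Ten electrons at a mere ten points per axis already needs 10^30 samples, which is why practical methods work in cleverly chosen basis sets or with stochastic sampling instead of grids.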
Thanks for pointing this out. These numerical difficulties are also a big part of the problem, albeit less accessible to people who aren’t comfortable with the concept of high-dimensional Hilbert spaces. A friend of mine had a really nice write-up in his thesis on this difficulty. I’ll see if I can dig it up.
Why do we have to solve it? In his latest book, he states that he calculates you can get the thermal noise down to 1⁄10 the diameter of a carbon atom or less if you use stiff enough components.
Furthermore, you can solve it empirically. Just build a piece of machinery that tries to accomplish a given task, and measure its success rate. Systematically tweak the design and measure the performance of each variant. Eventually, you find a design that meets spec. That's how chemists do it today, actually.
Edit: to the −1, here's a link where a certain chemist that many know is doing exactly this: http://pipeline.corante.com/archives/2013/06/27/sealed_up_and_ready_to_go.php
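The tweak-and-measure loop described above can be sketched in a few lines. Everything here is hypothetical (the "measurement" is a simulated noisy success rate whose optimum the optimizer cannot see), but the structure is the whole method: measure, perturb, keep improvements, stop at spec.

```python
import random

random.seed(0)  # make the noisy "measurements" reproducible

def measured_success_rate(design, trials=2000):
    """Noisy stand-in for measuring one design variant in the lab.
    Hypothetical: the true optimum at design = 3.7 is hidden."""
    true_rate = max(0.0, 1.0 - abs(design - 3.7) / 5.0)
    return sum(random.random() < true_rate for _ in range(trials)) / trials

def tweak_until_spec(start, spec=0.9, step=0.25, max_iters=100):
    """Greedy tweak-measure-keep loop: try neighbouring designs,
    keep whichever measured best, stop once the spec is met."""
    design, best = start, measured_success_rate(start)
    for _ in range(max_iters):
        if best >= spec:
            break
        for candidate in (design - step, design + step):
            rate = measured_success_rate(candidate)
            if rate > best:
                design, best = candidate, rate
    return design, best
```

The catch, of course, is the cost per "measurement": in the lab each iteration is a synthesis and an assay, which is exactly the experimentation budget the rest of this thread is arguing about.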
Can you expand on this?
A strong AI should be better than humans at pretty much every facet of reasoning, essentially as a starting premise. It's not like humans aren't computers; we're just wetware computers built very differently from our current technology. "As good as the best humans" should be the absolute floor if we're positing the abilities of an optimally designed computer.
Humans are also bad at numerical derivatives. Derivatives are really messy when we don't have a closed analytical form for the derivative f'. Basically the problem is that the finite-difference formula

f'(x) \approx \frac{f(x+h) - f(x)}{h}

involves subtracting nearly equal numbers and then dividing by almost zero. Both of these operations destroy numerical accuracy very quickly, because they take very tiny errors and turn them into very large numbers. As long as the solution to the n-body problem is expressed in terms of a differential Taylor series without analytic components, it's going to be very difficult to evaluate accurately.
For practical problems, where we don't know the initial state of the system to infinite accuracy, this is a big problem. It also forces you to use lots and lots of memory storing all your numbers at high precision, because you burn through that accuracy really quickly.
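The cancellation problem is easy to demonstrate: for a forward difference, shrinking h first reduces truncation error (which scales like h), but past roughly the square root of machine epsilon the subtraction of nearly equal numbers dominates and accuracy gets worse again.

```python
import math

def forward_diff(f, x, h):
    """One-sided finite difference: (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# Derivative of sin at x = 1 is cos(1). Watch the error as h shrinks:
# it improves, bottoms out near h ~ sqrt(machine epsilon) ~ 1e-8,
# then blows back up as round-off in f(x+h) - f(x) takes over.
x = 1.0
exact = math.cos(x)
errors = {h: abs(forward_diff(math.sin, x, h) - exact)
          for h in (1e-1, 1e-4, 1e-8, 1e-12)}
```

This is the precise sense in which "computers are bad at derivatives": there is a hard floor on the accuracy of naive differencing, no matter how small you make h.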
Side note: finite differencing (which, you're right, typically throws away half of your precision) isn't the only way to get a computer to take a derivative. Automatic differentiation packages will typically get you the derivative of an explicitly defined function to roughly the accuracy with which you can evaluate the function itself.
I’m not familiar with the n-body problem series solution, though; there’s lots of other ways that could turn out to be impractical to evaluate.
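For readers unfamiliar with it, forward-mode automatic differentiation can be sketched with dual numbers: each value carries its derivative through the arithmetic via the chain rule, so there is no small-h subtraction and no cancellation. A minimal sketch (supporting only +, *, and sin, enough to differentiate simple expressions):

```python
import math

class Dual:
    """Minimal forward-mode autodiff: a (value, derivative) pair that
    propagates exact derivative arithmetic through + and *."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):  # product rule
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def sin_d(x):
    """sin for dual numbers: chain rule gives cos(x) * x'."""
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def derivative(f, x):
    """Seed x with derivative 1 and read off f'(x)."""
    return f(Dual(x, 1.0)).dot
```

For example, derivative(lambda x: x * sin_d(x), 1.0) returns sin(1) + cos(1) to machine precision, whereas a finite difference on the same function would lose about half the significant digits.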