You know more about this than me—is the three-body problem computable by the technical definition?
OK, I really want to know why this question got voted down. It seemed like a perfectly reasonable question for someone to ask if they don’t know much about these subjects and have heard that the three-body problem is unsolvable.
You can numerically integrate the three-body problem. So there is a linear-time algorithm: computing approximately what will happen up to time t takes time roughly proportional to t. There just isn’t a logarithmic-time algorithm (which is what we would normally mean by a “closed form”).
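(For concreteness, here is a minimal sketch of what “numerically integrate” means here: step Newton’s equations forward with a fixed time step, so the work grows linearly with the simulated time t. The masses, initial conditions, and step size below are made up for illustration; a serious code would use an adaptive or higher-order integrator.)

```python
import numpy as np

G = 1.0  # gravitational constant in arbitrary units (illustrative choice)

def accelerations(pos, masses):
    """Newtonian gravitational acceleration on each body due to the others."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def integrate(pos, vel, masses, t_end, dt):
    """Leapfrog (velocity-Verlet) integration: about t_end/dt steps, i.e. O(t) work."""
    acc = accelerations(pos, masses)
    for _ in range(int(t_end / dt)):
        vel = vel + 0.5 * dt * acc
        pos = pos + dt * vel
        acc = accelerations(pos, masses)
        vel = vel + 0.5 * dt * acc
    return pos, vel

# Illustrative initial conditions (not a known periodic orbit).
masses = np.array([1.0, 1.0, 1.0])
pos = np.array([[1.0, 0.0], [-0.5, 0.8], [-0.5, -0.8]])
vel = np.array([[0.0, 0.3], [-0.3, -0.1], [0.3, -0.2]])
print(integrate(pos, vel, masses, t_end=10.0, dt=1e-3))
```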
Ah, okay. So all that matters is that the right answer is computable given infinite resources. How do you reconcile this with the apparently finite computational resources of the universe?
To be clear—the three-body problem deals with real numbers. Computers can’t work directly with real numbers, since real numbers can’t be finitely represented. Hence when we speak about the computability of a problem “compute f(x)” that calls for a real number answer, we really mean the computability of the problem “compute f(x) to within epsilon” (where now epsilon is another input to the problem).
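(As a toy illustration of that interface, with epsilon as an explicit input to the problem; sqrt(2) here is just an example of a computable real, not anything from the discussion above.)

```python
from fractions import Fraction

def sqrt2_to_within(eps):
    """Return a rational within eps of sqrt(2), by bisection with exact arithmetic."""
    eps = Fraction(eps)
    lo, hi = Fraction(1), Fraction(2)  # sqrt(2) lies in [1, 2]
    while hi - lo >= eps:
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo  # the true value is in [lo, hi], so this is within eps

print(float(sqrt2_to_within(Fraction(1, 10**12))))
```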
You can compute what will happen efficiently. If you want to know where the 3 bodies are after t seconds to within some constant error, it only takes you about O(t) steps.
But errors accumulate over time. That means if you want the error to be bounded by a constant at the end of the calculation, the larger t is, the more you have to do along the way to avoid or correct error. If that just means you have to use higher precision calculations, that would give O(t log t), but if it means you have to use finer time steps, that would make the computational effort O(t^2) or worse.
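(A toy check of the accumulation, using forward Euler on a simple test equation rather than the three-body problem itself: at a fixed step size h, the relative error at time t grows roughly linearly with t, so holding the final error at a constant forces h down like 1/t and the total number of steps up like t^2.)

```python
import math

def euler_exp(t, h):
    """Forward Euler for dy/dt = y, y(0) = 1; the exact answer is e**t."""
    y = 1.0
    for _ in range(int(round(t / h))):
        y += h * y
    return y

h = 1e-3
for t in (1, 2, 4, 8):
    rel_err = abs(euler_exp(t, h) - math.exp(t)) / math.exp(t)
    print(t, rel_err)  # roughly doubles each time t doubles, at fixed h
```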
This is primarily a function of how accurately you can do division and multiplication. Even if it isn’t O(t), it is almost certainly O(t^(1+epsilon)) for any epsilon>0.
Suppose for the sake of argument you had infinite-precision division and multiplication. There would still be nonzero error due to the use of a finite time step. (Runge-Kutta methods etc. give smaller error for a given step size than the more obvious numerical integration methods, but still not zero.) If you want to reduce the error, you need to use a smaller step size. Generally speaking, the error is a polynomial function of the step size (unlike arithmetic error, which decreases exponentially with the number of digits of precision), so you would expect O(t^(1+epsilon)) to be unattainable. Unless there’s some method of reducing error exponentially with step size for the three-body problem that I’m missing?
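(The “polynomial in the step size” point is easy to check on a test equation; again a toy sketch, not the three-body system. Classical fourth-order Runge-Kutta has global error roughly proportional to h^4, so halving h cuts the error by about 16, compared with about 2 for the first-order Euler sketch above; neither shrinks exponentially in 1/h.)

```python
import math

def rk4_exp(t, h):
    """Classical RK4 for dy/dt = y, y(0) = 1 (right-hand side f(y) = y)."""
    y = 1.0
    for _ in range(int(round(t / h))):
        k1 = y
        k2 = y + 0.5 * h * k1
        k3 = y + 0.5 * h * k2
        k4 = y + h * k3
        y += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

exact = math.exp(1.0)
for h in (0.1, 0.05, 0.025):
    print(h, abs(rk4_exp(1.0, h) - exact))  # error drops ~16x per halving of h
```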
Runge-Kutta takes O(delta^(-1/4)) time to get an approximation quality of delta, I think. I don’t know if we can yet, but I suspect it is possible to get an approximation quality of delta in time O(delta^(-epsilon)) for any epsilon>0 (in the same sense that I suspect it will eventually be possible to multiply two nxn matrices in time O(n^(2 + epsilon)) for any epsilon>0, even though it’s not practical at all). This would probably imply JoshuaZ’s stated time bound. It doesn’t require exponentially fast error reduction, just arbitrarily good polynomial error reduction.
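(For what it’s worth, the delta^(-1/4) figure is just the fourth-order error bound turned around, treating the final time T and the error constant C as fixed:

global error ≈ C * h^4 = delta  =>  h ≈ (delta/C)^(1/4)  =>  steps = T/h ≈ T * (C/delta)^(1/4) = O(delta^(-1/4)).

Getting O(delta^(-epsilon)) for every epsilon>0 would amount to having integration methods of arbitrarily high order available, i.e. arbitrarily good polynomial error reduction rather than exponential error reduction.)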
Anyway, the model I described in the post doesn’t actually have this problem. More precision just comes from using a finer discrete system to approximate the universe (if in fact it is continuous, which I would put a probability of less than 50% on) and still using a linear size circuit to do the simulation. You only pay logarithmically for using a finer grid, in any of the schemes I proposed.
An infinite sequence of algorithms converging on arbitrarily good polynomial error reduction? Fair enough, I certainly can’t rule that out at this stage.
But I don’t understand your last point: how can you pay only logarithmically for using a finer grid?
The post had a concrete complexity measure, which pays logarithmically for a finer grid (that is, doubling the size of the universe is the same as adding one more bit of complexity). The point is, you can only afford to pay logarithmically in the size of the universe (if you want known physical theories to have good complexity as compared to stupid explanations for our observations). Making the grid twice as fine is just making the universe twice as large, so you only pay 1 more bit: the extra bit needed to describe the larger size of the universe. If you disagree with this then you probably disagree fundamentally with my approach. That is obviously valid; I don’t really like my approach that much. But alternative approaches, like the speed prior, seem much worse to me.
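(A toy illustration of that accounting, not the exact measure from the post: if the simulating program takes the grid resolution as an explicit parameter, then writing down a resolution of n cells per dimension costs about log2(n) bits, so each doubling of the resolution adds roughly one bit.)

```python
for n in (1024, 2048, 4096, 8192):
    print(n, n.bit_length())  # each doubling of the grid adds about one bit
```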
Oh, sorry, yes, when your cost measure is complexity, then a finer grid incurs at most a logarithmic penalty, I agree. I also agreed the speed prior is a much worse approach—I would go so far as to say it is flat-out falsified by the observed extreme computational cost of physics.
Hmm, yes you are correct. I was being stupid.
Well, it’s simple to find a chaotic problem that can’t be simulated efficiently. I was just trying to understand what “the universe is computable” really means, since the universe can’t be computed exactly.
It seems like you and some others in this thread are assuming that real numbers describe some actual behavior of the universe, but that’s begging the question. If the universe is computable, it implies that all quantities are discrete.
Well, if it turns out the universe is continuous, then when we conjecture it to be computable, we typically mean the same thing we mean when we say pi is computable: there exists a fixed length program that could compute it to any desired degree of precision (assuming initial conditions specified to sufficient precision).
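(To make “pi is computable” concrete, here is one fixed, finite program that takes the desired precision as input. It is a standard sketch using Machin’s formula, pi = 16*arctan(1/5) - 4*arctan(1/239), with the alternating-series error bound; any convergent series with a known error bound would do.)

```python
from fractions import Fraction

def arctan_inv(n, eps):
    """arctan(1/n) to within eps, via the alternating series sum of (-1)^k / ((2k+1) * n^(2k+1))."""
    total, k = Fraction(0), 0
    while True:
        term = Fraction(1, (2 * k + 1) * n ** (2 * k + 1))
        if term < eps:
            return total  # alternating series: truncation error is below the first omitted term
        total += term if k % 2 == 0 else -term
        k += 1

def pi_to_within(eps):
    """Return a rational within eps of pi, using Machin's formula."""
    eps = Fraction(eps)
    return 16 * arctan_inv(5, eps / 32) - 4 * arctan_inv(239, eps / 8)

print(float(pi_to_within(Fraction(1, 10**15))))  # 3.141592653589793...
```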
Continuous quantities are the simplest explanation for the evidence we have—there are some hints that it could be otherwise, but they’re only hints.