This is primarily a function of how accurately you can do division and multiplication. Even if it isn't O(t), it is almost certainly O(t^(1+epsilon)) for any epsilon>0.
Suppose, for the sake of argument, you had infinite-precision division and multiplication. There would still be finite error due to the use of a finite time step. (Runge-Kutta methods and the like give smaller error for a given step size than the more obvious numerical integration methods, but still not zero.) If you want to reduce the error, you need to use a smaller step size. Generally speaking, the error is a polynomial function of the step size (unlike arithmetic error, which decreases exponentially with the number of digits of precision), so you would expect O(t^(1+epsilon)) to be unattainable. Unless there's some method of reducing error exponentially with step size for the three-body problem that I'm missing?
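To make that concrete, here's a quick illustrative sketch (my own, not anything rigorous): classic RK4 on dy/dt = y, whose exact solution at t = 1 is e, so the error can be read off directly. Halving the step size cuts the error by roughly 2^4 = 16, i.e. polynomially in the step size, not exponentially.

```python
# Minimal sketch: classic fixed-step RK4 on dy/dt = y (exact solution e^t),
# showing that the error shrinks only polynomially as the step size shrinks.
import math

def rk4(f, y0, t0, t1, h):
    """Integrate dy/dt = f(t, y) from t0 to t1 with fixed step size h."""
    t, y = t0, y0
    for _ in range(round((t1 - t0) / h)):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

f = lambda t, y: y  # dy/dt = y
for h in (0.1, 0.05, 0.025, 0.0125):
    err = abs(rk4(f, 1.0, 0.0, 1.0, h) - math.e)
    print(f"h = {h:<7} error = {err:.3e}")  # error drops ~16x per halving, i.e. O(h^4)
```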
Runge-Kutta takes O(delta^(-1/4)) time to get an approximation quality of delta, I think. I don't know if we can yet, but I suspect it is possible to get an approximation quality of delta in time O(delta^(-epsilon)) for any epsilon>0 (in the same sense that I suspect it will eventually be possible to multiply two nxn matrices in time O(n^(2+epsilon)) for any epsilon>0, even though it's not at all practical). This would probably imply JoshuaZ's stated time bound. It doesn't require exponentially fast error reduction, just arbitrarily good polynomial error reduction.
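For reference, here is the back-of-the-envelope arithmetic behind that delta^(-1/4) figure, assuming the textbook claim that RK4's global error scales like C*h^4 (the helper function below is purely illustrative):

```python
# Sketch of the scaling: if RK4's global error is roughly C * h^4, then hitting
# a tolerance delta needs h ~ (delta / C)^(1/4), i.e. about T / h = T * (C / delta)^(1/4)
# steps over a time interval of length T.
def rk4_steps_needed(delta, T=1.0, C=1.0):
    """Rough step count for global error <= delta with a 4th-order method."""
    h = (delta / C) ** 0.25
    return T / h

for delta in (1e-4, 1e-8, 1e-12):
    print(f"delta = {delta:.0e}  ->  ~{rk4_steps_needed(delta):.0f} steps")
# 10,000x more accuracy costs only ~10x more steps: polynomial, but still far
# from the O(delta^(-epsilon)) "for every epsilon" scaling speculated above.
```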
Anyway, the model I described in the post doesn't actually have this problem. More precision just comes from using a finer discrete system to approximate the universe (if in fact it is continuous, which I would give less than 50% probability) and still using a linear-size circuit to do the simulation. You only pay logarithmically for using a finer grid in any of the schemes I proposed.
An infinite sequence of algorithms converging on arbitrarily good polynomial error reduction? Fair enough; I certainly can't rule that out at this stage.
But I don’t understand your last point: how can you pay only logarithmically for using a finer grid?
The post had a concrete complexity measure, which pays logarithmically for a finer grid (that is, doubling the size of the universe is the same as adding one more bit of complexity). The point is, you can only afford to pay logarithmically in the size of the universe (if you want known physical theories to have good complexity as compared to stupid explanations for our observations). Making the grid twice as fine is just making the universe twice as large, so you only pay 1 more bit: the extra bit needed to describe the larger size of the universe. If you disagree with this then you probably disagree fundamentally with my approach. That is obviously valid; I don’t really like my approach that much. But alternative approaches, like the speed prior, seem much worse to me.
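As a toy illustration of the bookkeeping (the helper below is hypothetical, just to make the counting explicit): the description only has to specify the resolution, and that specification grows like log2 of the resolution, so each doubling of fineness adds about one bit.

```python
# Toy illustration: the description-length cost of specifying the grid
# resolution grows like log2 of the resolution, so doubling the fineness of
# the grid adds roughly one bit to the complexity.
import math

def resolution_cost_bits(cells_per_dimension):
    """Bits needed just to write down the grid resolution."""
    return math.ceil(math.log2(cells_per_dimension))

for n in (2**10, 2**11, 2**12, 2**40):
    print(f"{n:>15} cells/dim  ->  {resolution_cost_bits(n)} bits")
# The physical law being simulated on the grid contributes the same fixed
# description length regardless of how fine the grid is.
```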
Oh, sorry, yes: when your cost measure is complexity, a finer grid incurs at most a logarithmic penalty, I agree. I also agree that the speed prior is a much worse approach; I would go so far as to say it is flat-out falsified by the observed extreme computational cost of physics.
Hmm, yes, you are correct. I was being stupid.