Well, I dunno. If you describe physics as a Turing machine program, à la Solomonoff induction, special relativity may well be more incredible than god(s), chiefly because Turing machines may well be unable to implement exact Lorentz invariance, but can implement some kind of god(s), i.e. superintelligences. (Approximate relativity is doable, though.)
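For concreteness, "exact Lorentz invariance" here means the spacetime interval is preserved under every boost. A discrete computation (say, a lattice with finite spacing) singles out a preferred frame, so it can at best approach this symmetry in a limit. A minimal statement of what would have to hold exactly:

```latex
% Lorentz boost along x, with \gamma = 1/\sqrt{1 - v^2/c^2}:
t' = \gamma \left( t - \frac{v x}{c^2} \right), \qquad x' = \gamma \, (x - v t)
% Exact Lorentz invariance: the interval is preserved for every velocity v:
c^2 t'^2 - x'^2 = c^2 t^2 - x^2
```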
Solomonoff induction creates models of the universe from the point of view of a single observer. As such, it probably wouldn't have any particular problem with Einsteinian relativity.
On the other hand, if you want a computational model of the universe that is independent of the choice of any particular observer, relativity will get you into trouble.
Relativity doesn't depend on the observer, it depends on the reference frame… (or rather, doesn't depend on it). I can launch a Michelson–Morley experiment into space and have it send data to me, and it'll need to obey Lorentz invariance and everything else. edit: or just for GPS to work. You have a valid point, though: S.I. has a natural preferred frame coinciding with the observer.
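To put a number on the GPS point, here is a back-of-the-envelope sketch (standard textbook constants and first-order formulas; the exact drift depends on orbit details) of how far a GPS satellite clock would wander per day without relativistic corrections:

```python
# Back-of-the-envelope relativistic clock drift for a GPS satellite.
# Constants are standard approximations; this is a sketch, not an ephemeris.

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8      # speed of light, m/s
R_earth = 6.371e6     # mean Earth radius, m
r_orbit = 2.6561e7    # GPS orbital radius (~20,200 km altitude), m
day = 86400.0         # seconds per day

# Orbital speed for a circular orbit: v = sqrt(GM / r)
v = (GM / r_orbit) ** 0.5

# Special relativity: the moving clock runs slow by ~v^2 / (2 c^2)
sr_drift = -(v**2 / (2 * c**2)) * day

# General relativity: the higher gravitational potential makes the
# satellite clock run fast by ~(GM/c^2) * (1/R_earth - 1/r_orbit)
gr_drift = (GM / c**2) * (1 / R_earth - 1 / r_orbit) * day

print(f"orbital speed:    {v:.0f} m/s")
print(f"SR drift per day: {sr_drift * 1e6:+.1f} microseconds")
print(f"GR drift per day: {gr_drift * 1e6:+.1f} microseconds")
print(f"net drift:        {(sr_drift + gr_drift) * 1e6:+.1f} microseconds/day")
```

The net drift of roughly +38 microseconds/day corresponds to light-travel ranging errors on the order of 10 km/day, which is why the satellite clocks are deliberately detuned before launch.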
Lorentz invariance is a very neat, very elegant property, which as far as we know only incredibly complicated computations have, and only approximately. This makes me think that an algorithmic prior is not a very good idea. The universe need not be made of elementary components in the way computations are.
Moreover, all computational models assume some sort of global state and absolute time. These assumptions don't seem to hold in physics; or at least they may hold for a single observer, but only at the cost of complex models that aren't favored by a natural simplicity prior.
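To make the "global state and absolute time" point concrete, here is a minimal toy sketch of the structure virtually every Turing-style model shares: one global configuration, updated everywhere at once by a universal clock tick (a 1-D cellular automaton, with rule 110 as an arbitrary choice):

```python
# Minimal 1-D cellular automaton: the shape of virtually every
# computational model of a universe. Two assumptions are baked in:
#   1. `state` is a single global configuration;
#   2. `step` updates every cell simultaneously -- an absolute time tick.
# Neither assumption has an obvious Lorentz-invariant analogue.

RULE = 110  # arbitrary choice of update rule

def step(state: list[int]) -> list[int]:
    n = len(state)
    return [
        (RULE >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
        for i in range(n)
    ]

state = [0] * 31 + [1] + [0] * 31   # global state at t = 0
for t in range(16):                 # absolute, observer-independent time
    print("".join(".#"[c] for c in state))
    state = step(state)
```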
If it were possible to realize a Solomonoff inductor in our universe, I would expect it to be able to learn, but it might not necessarily be optimal.
It can’t do exact relativity but it can do exact general AI? Not to mention that simulating a God that doesn’t include relativity will produce the wrong answer.
It being able to do AI is generally accepted as uncontroversial here. We don't know what would be the shortest way to encode a very good approximation to relativity either: it could be straightforward, or it could go through a singleton intelligence that somehow arises in a more convenient universe and then proceeds to build very good approximations to more elegant universes (given some hint it discovers). I'm an atheist too; it's just that given a sufficiently bad choice of how you represent theories, the shortest hypothesis can involve arbitrarily crazy things just to do something fairly basic (e.g. to make a very, very good approximation of the real numbers). edit: and relativity is fairly unique in just how elegant it is, yet how awfully inelegant any simulation of it gets.
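On the real-numbers aside: a computation can approximate a generic real to any requested accuracy, but never holds it exactly. A toy sketch, using Newton's iteration in exact rational arithmetic:

```python
# A Turing machine can approximate sqrt(2) to any requested accuracy,
# but never holds the exact real number -- a toy illustration of
# "a very, very good approximation of the real numbers".
from fractions import Fraction

def sqrt2_approx(iterations: int) -> Fraction:
    """Newton's method x -> (x + 2/x)/2 in exact rational arithmetic."""
    x = Fraction(3, 2)
    for _ in range(iterations):
        x = (x + 2 / x) / 2
    return x

x = sqrt2_approx(6)
error = abs(x * x - 2)   # exact rational error of x^2 vs 2
print(x)                 # a rational with a huge numerator and denominator
print(float(error))      # astronomically small, but never zero
```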
We don’t know what would be the shortest way to encode a very good approximation to relativity either
The idea is that if humans can come up with approximations of relativity that are good enough for the purpose of predicting their observations, then in principle SI can do it too.
The issue is prior probability: since humans use a different prior than SI, it's not straightforward that SI won't favor shorter models that in practice perform worse. There are universality theorems which essentially prove that, given enough observations, SI will eventually catch up with any semi-computable learner, but the number of observations required for this to happen may be far from practical.
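For reference, one standard form of such a result is Solomonoff's convergence bound (here in Hutter's formulation, with the constant quoted from memory, so treat it as approximate): for any computable measure \mu generating the data, the universal predictor M satisfies

```latex
\sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[ \big( M(x_t \mid x_{<t}) - \mu(x_t \mid x_{<t}) \big)^2 \right] \;\le\; \frac{\ln 2}{2}\, K(\mu)
```

so the total prediction error is finite and controlled by the Kolmogorov complexity K(\mu) of the environment; but K(\mu) can be enormous, which is exactly the practicality caveat.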
For instance, there is a theorem which proves that, for any algorithm, if you sample problem instances according to a Solomonoff distribution, then average-case complexity asymptotically matches worst-case complexity. If the Solomonoff distribution were a reasonable prior for practical purposes, then we should observe that, for all algorithms on realistic instance distributions, average-case complexity is about the same order of magnitude as worst-case complexity. Empirically, this is not the case: the simplex algorithm for linear programming, for instance, has exponential worst-case time complexity but is usually very efficient (polynomial time) on typical inputs.
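The simplex gap itself is hard to demonstrate in a few lines, but the same average-versus-worst-case phenomenon shows up with a simpler stand-in: naive quicksort with a first-element pivot, which is O(n^2) on already-sorted inputs yet O(n log n) on typical random ones. A toy sketch (quicksort here is a substitute for simplex, not part of the theorem):

```python
# Average case vs. worst case, with naive quicksort standing in for simplex:
# worst-case inputs exist (here: already-sorted lists), but random inputs
# almost never hit them.
import random
import sys
import time

sys.setrecursionlimit(100_000)  # sorted input recurses ~n deep

def quicksort(xs: list[int]) -> list[int]:
    """Naive quicksort, first element as pivot: O(n^2) worst case."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

n = 3000
random_input = [random.randrange(n) for _ in range(n)]
sorted_input = list(range(n))   # worst case for this pivot choice

for name, data in [("random (typical)", random_input),
                   ("sorted (worst case)", sorted_input)]:
    start = time.perf_counter()
    quicksort(data)
    print(f"{name:>20}: {time.perf_counter() - start:.3f} s")
```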