It can’t do exact relativity but it can do exact general AI? Not to mention that simulating a God that doesn’t include relativity will produce the wrong answer.
Its being able to do AI is generally accepted as uncontroversial here. We don’t know what the shortest way to encode a very good approximation to relativity would be either: it could be straightforward, or it could go through a singleton intelligence that somehow arises in a more convenient universe and then proceeds to build very good approximations to more elegant universes (given some hint it discovers). I’m an atheist too; it’s just that given a sufficiently bad choice of theory representation, the shortest hypothesis can involve arbitrarily crazy things just to do something fairly basic (e.g. to make a very, very good approximation of real numbers). edit: and relativity is fairly unique in just how elegant the theory itself is but how awfully inelegant any simulation of it gets.
We don’t know what would be the shortest way to encode a very good approximation to relativity either
The idea is that if humans can come up with approximations of relativity that are good enough for the purpose of predicting their observations, then in principle SI can do so too.
The issue is the prior probability: since humans use a different prior than SI, it’s not obvious that SI will not favor shorter models that perform worse in practice. There are universality theorems which essentially prove that, given enough observations, SI will eventually catch up with any semi-computable learner, but the number of observations needed for this to happen might be far from practical.
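The “catching up” behavior can be illustrated with a toy sketch. The hypothesis class, the description lengths, and the three hard-coded predictors below are all assumptions standing in for the full space of programs; the only structural feature borrowed from SI is the 2^(-length) prior over hypotheses combined with Bayesian updating:

```python
import math

# Toy stand-in for a Solomonoff-style mixture (assumption: a finite,
# hand-picked hypothesis class instead of all programs).
# Each hypothesis maps a history to P(next bit = 1) and is weighted
# by a 2^(-description length) prior.
hypotheses = [
    (2, lambda hist: 0.5),  # "fair coin": short program, high prior
    (5, lambda hist: 0.9),  # "mostly ones": longer program, lower prior
    (8, lambda hist: 0.1),  # "mostly zeros": longest, lowest prior
]

def mixture_predict(hist, log_weights):
    """Posterior-weighted probability that the next bit is 1."""
    total = sum(math.exp(lw) for lw in log_weights)
    return sum(math.exp(lw) * p(hist)
               for (_, p), lw in zip(hypotheses, log_weights)) / total

def update(log_weights, hist, bit):
    """Bayes update: scale each weight by the likelihood it assigned."""
    new = []
    for (_, p), lw in zip(hypotheses, log_weights):
        q = p(hist)
        new.append(lw + math.log(q if bit == 1 else 1.0 - q))
    return new

# Prior: log of 2^(-length).
log_weights = [-length * math.log(2) for length, _ in hypotheses]

# The data actually comes from the "mostly ones" source. Initially the
# mixture is dominated by the shorter "fair coin" hypothesis, so its
# predictions are poor; after enough observations the better (but longer)
# hypothesis takes over.
hist = []
early = mixture_predict(hist, log_weights)
for bit in [1] * 50:
    log_weights = update(log_weights, hist, bit)
    hist.append(bit)
late = mixture_predict(hist, log_weights)
```

With only three hypotheses the catch-up takes a few dozen bits; with a prior over all programs, the point in the comment above is that the analogous transient can be impractically long.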
For instance, there is a theorem which proves that, for any algorithm, if you sample problem instances according to a Solomonoff distribution, then average-case complexity will asymptotically match worst-case complexity. If the Solomonoff distribution were a reasonable prior for practical purposes, then we should observe that, for all algorithms on realistic instance distributions, average-case complexity is about the same order of magnitude as worst-case complexity. Empirically, we observe that this is not necessarily the case: the Simplex algorithm for linear programming, for instance, has exponential worst-case time complexity but is usually very efficient (polynomial time) on typical inputs.
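The same average-case/worst-case gap can be shown with something lighter than the Simplex algorithm (this is an analogy I am substituting, not the example from the comment): naive quicksort with a first-element pivot does Θ(n²) comparisons on already-sorted input, its worst case, but roughly n·log n on a typical random input:

```python
import random

def quicksort_comparisons(xs):
    """Sort with first-element-pivot quicksort; return the comparison count."""
    if len(xs) <= 1:
        return 0
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

n = 500
random.seed(0)

# Typical input: a random permutation. Comparisons grow like ~1.39 * n * log2(n).
typical = quicksort_comparisons(random.sample(range(n), n))

# Worst-case input for this pivot rule: already sorted. Every split is
# maximally unbalanced, giving exactly n*(n-1)/2 comparisons.
worst = quicksort_comparisons(list(range(n)))
```

Whether the worst case matters in practice depends entirely on which inputs your distribution actually generates, which is exactly the sense in which the Solomonoff distribution fails to resemble realistic instance distributions.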