― Halldór Laxness, Under the Glacier.
...and then adjusted our senses of the ‘incredible’ accordingly, so that Special Relativity seemed less incredible, and God more so.
A sense of incredulity is not a belief, so it’s not covered by those injunctions. A sense of wonder is both pleasant and good for mental health, and diverging too much from the average in deep emotional reactions carries a real cost in less accurate empathic modelling.
Well, I dunno. If you describe physics as a Turing machine program, à la Solomonoff induction, special relativity may well be more incredible than god(s), chiefly because Turing machines may well be unable to implement exact Lorentz invariance, but can implement some kind of god(s), i.e. superintelligences. (Approximate relativity is doable, though.)
Solomonoff induction creates models of the universe from the point of view of a single observer. As such, it probably wouldn’t have any particular problem with Einsteinian relativity.
On the other hand, if you want a computational model of the universe that is independent from the choice of any particular observer, relativity will get you into trouble.
Relativity doesn’t depend on the observer, it depends on the reference frame… (or rather, doesn’t depend on it). I can launch a Michelson–Morley experiment into space and have it send data to me, and it’ll need to obey Lorentz invariance and everything else. Edit: or just for GPS to work. You have a valid point, though: SI has a natural preferred frame coinciding with the observer.
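On the GPS point, a back-of-envelope check makes it concrete. The constants below are standard textbook values, not from this thread, and the two corrections are only the leading-order terms:

```python
import math

# Assumed standard values (not from the thread).
GM = 3.986004e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m
R_ORBIT = 2.6571e7    # GPS orbital radius (~20,200 km altitude), m
C = 2.998e8           # speed of light, m/s
DAY = 86400.0         # seconds per day

v = math.sqrt(GM / R_ORBIT)  # circular orbital speed, ~3.87 km/s

# Special relativity: the moving satellite clock runs slow.
sr_drift = -(v**2 / (2 * C**2)) * DAY            # roughly -7 microseconds/day

# General relativity: the clock higher in the potential well runs fast.
gr_drift = (GM / C**2) * (1 / R_EARTH - 1 / R_ORBIT) * DAY   # roughly +46 us/day

net = sr_drift + gr_drift                        # roughly +38 microseconds/day
```

Since light covers about 3 m in 10 ns, an uncorrected drift of tens of microseconds per day would put position fixes off by kilometers within a day, which is why the corrections are built into the system.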
Lorentz invariance is a very neat, very elegant property which, as far as we know, only incredibly complicated computations have, and only approximately. This makes me think that an algorithmic prior is not a very good idea. The universe need not be made of elementary components in the way that computations are.
Moreover, all computational models assume some sort of global state and absolute time. These assumptions don’t seem to hold in physics; or at least they may hold for a single observer, but they may require complex models that don’t respect a natural simplicity prior.
If it were possible to realize a Solomonoff inductor in our universe, I would expect it to be able to learn, but it might not necessarily be optimal.
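For readers unfamiliar with the structure being discussed, here is a toy sketch of how a Solomonoff-style inductor predicts: a simplicity-weighted Bayesian mixture over all hypotheses consistent with the data. The hypothesis class (periodic bit patterns) and the 2^-(2·period) prior are stand-ins of my choosing, nothing like a real universal prior over Turing machines:

```python
from fractions import Fraction

def predict_next_bit(observed, max_period=8):
    """Toy Solomonoff-style predictor. Hypotheses are periodic bit
    patterns; each gets prior mass 2^-(2*period), a crude stand-in
    for 2^-(description length). The prediction is the Bayesian
    mixture over all hypotheses consistent with the observed prefix."""
    weight = {0: Fraction(0), 1: Fraction(0)}
    for period in range(1, max_period + 1):
        for code in range(2 ** period):
            pattern = [(code >> i) & 1 for i in range(period)]
            # Keep only hypotheses that reproduce every observed bit.
            if all(observed[t] == pattern[t % period]
                   for t in range(len(observed))):
                prior = Fraction(1, 2 ** (2 * period))
                weight[pattern[len(observed) % period]] += prior
    total = weight[0] + weight[1]
    return {bit: w / total for bit, w in weight.items()}

# After seeing 010101, nearly all posterior mass sits on "next bit is 0",
# because the shortest consistent hypothesis (01 repeating) dominates.
probs = predict_next_bit([0, 1, 0, 1, 0, 1])
```

The point of the sketch is only the shape of the computation: shorter consistent hypotheses dominate the mixture, which is why the choice of representation (and hence of what counts as "short") matters so much in the surrounding discussion.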
It can’t do exact relativity but it can do exact general AI? Not to mention that simulating a God that doesn’t include relativity will produce the wrong answer.
Its being able to do AI is generally accepted as uncontroversial here. We don’t know what the shortest way to encode a very good approximation to relativity would be, either: it could be straightforward, or it could go through a singleton intelligence that somehow arises in a more convenient universe and then proceeds to build very good approximations to more elegant universes (given some hint it discovers). I’m an atheist too; it’s just that, given a sufficiently bad choice of the way you represent theories, the shortest hypothesis can involve arbitrarily crazy things just to do something fairly basic (e.g. to make a very, very good approximation of real numbers). Edit: and relativity is fairly unique in just how elegant it is but how awfully inelegant any simulation of it gets.
The idea is that if humans can come up with approximations of relativity that are good enough for the purpose of predicting their observations, then in principle SI can do it too.
The issue is prior probability: since humans use a different prior than SI, there is no guarantee that SI won’t favor shorter models that in practice perform worse.
There are universality theorems which essentially prove that, given enough observations, SI will eventually catch up with any semi-computable learner, but the number of observations required for this to happen might be far from practical.
For instance, there is a theorem which proves that, for any algorithm, if you sample problem instances according to a Solomonoff distribution, then average-case complexity will asymptotically match worst-case complexity.
If the Solomonoff distribution were a reasonable prior for practical purposes, then we should observe that, for all algorithms on realistic instance distributions, average-case complexity is about the same order of magnitude as worst-case complexity. Empirically, this is not the case: the simplex algorithm for linear programming, for instance, has exponential worst-case time complexity but is usually very efficient (polynomial time) on typical inputs.
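The same average-versus-worst-case gap can be shown with something simpler than the simplex method (which I've swapped out here for ease of demonstration): quicksort with a naive first-element pivot, where an already-sorted input is the classic bad case while random inputs are cheap.

```python
import random

def quicksort_comparisons(xs):
    """Quicksort with a naive first-element pivot; returns the number
    of element comparisons performed (a proxy for running time)."""
    if len(xs) <= 1:
        return 0
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

n = 300
worst = quicksort_comparisons(list(range(n)))    # sorted input: ~n^2/2 comparisons
random.seed(0)
typical = quicksort_comparisons(random.sample(range(n), n))  # ~2n ln n comparisons
```

On a uniform-random instance distribution the typical cost is an order of magnitude below the worst case; the theorem above says that under a Solomonoff distribution this gap would vanish, precisely because simple adversarial inputs like the sorted list carry large prior mass.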
Before remembering the older definition of “incredible” that is presumably meant, I parsed this as “Like all great rationalists you believed in things that were twice as awesome as theology”; and thought “Only twice?”.
What does this mean?
That on probabilistic or rational reflection one can come to believe intuitively implausible things that are as or more extraordinary than their theological counterparts. Or to mutilate Hamlet, that there are more things on earth than are dreamt of in heaven.
Most of quantum physics and relativity are certainly intuitively weirder than Jesus turning water into wine, self-replicating bread or a body of water splitting itself to create a passage.
I mean, our physics says it’s technically possible to make machines that do all of this. Without magic. Using energy collected in space and sent to Earth using beams of light. Although we probably wouldn’t use beams of light, because that’s inefficient.
I am confused—upvoting this comment is a rejection of this website.
I doubt that Laxness means “rationalist” in the LW community sense. In philosophy, a rationalist is defined as distinct from an empiricist, as one who believes knowledge to be arrived at from a priori cogitation, as opposed to experience.
Even after looking the book up on Google, without context, I can’t tell whether the rationalist being spoken of has gone astray through his reason, or has succeeded in finding the truth of something. But I am now interested in reading Laxness.
The mere size of the universe is pretty incredible. I don’t think it gets as much emphasis as it used to. I’m not sure whether people have quit thinking about it or gotten used to it.