Ph.D. student at UCLA, theoretical biophysics
valentinslepukhin
Yes, it is for the observer. You cannot deduce Born’s rule from the . No interpretation of quantum mechanics can help you with it.
“Bell’s Theorem only rules out local hidden variables.” Ok. Do you prefer a non-local theory, then?
Well, a violation of the Laws of Nature is a violation of the Laws of Nature, whether they are applied to drawing pictures remotely, without any interaction, or to the motion of stars. If Steve can draw pictures on toast remotely, he violates the Laws of Nature, and the hypothesis that the Universe is completely governed by the Laws of Nature, without any Higher Power, aliens, a guy who runs a simulation, etc., is falsified.
Now, going back to the aliens vs. God hypothesis.
“The dragon is at least compatible with what we know of science. Magical powers, not so much.”
The problem is that compatibility of a hypothesis with what we knew before is not an argument at all when we are talking about a fundamental hypothesis (i.e., not “who stole my car” but a hypothesis explaining the Universe). Indeed, look at the history of Quantum Mechanics. Initially a lot of scientists hated the idea that the probabilistic description of the Universe is fundamental, so they came up with the idea of hidden variables. Everything they knew before was deterministic: if you knew all the velocities and positions of all the molecules, you could predict everything exactly, and since you did not, that is where classical probability came from. So they simply suggested the same idea for Quantum Mechanics: everything is actually still deterministic, we just do not know the hidden variables, and that is why observations appear probabilistic. You do not need to invent a modified Turing machine that produces different results with different probabilities; you can keep the good old deterministic Turing machine. Looks much better, right?
Then it turned out that you actually can distinguish between the hidden-variables hypothesis and the fundamentally probabilistic one (see Bell’s inequalities), and the experimental tests demonstrated that there are no local hidden variables. QM is fundamentally probabilistic.
Thus, the fact that we need to throw all our current assumptions into the trash can and build a theory on new assumptions does not mean that we should assign a small probability to this new theory, or that we need hidden variables or hidden aliens to account for the same observations. It may just mean we were wrong.
Well, my expectations would decrease if some of the miracles I believe in were shown to be fakes or natural events. The miracles I believe in are not the ones people believe in locally, but the ones the Church recognizes globally; usually a special commission is sent to check whether it is indeed a miracle or just a natural event (or a fake). I would say I assign a high probability to the miracles approved by this commission being genuine, and if you demonstrated to me that they are not, it would decrease my probability. The miracles I can name:
-various myrrh-streaming icons, as long as they have passed checks by church officials above the local level
-the testimonies collected for the canonization of saints. Each time a new person is canonized, one of the main criteria is whether there are miracles attributed to prayers to him, so this is quite a large body of testimony from different witnesses. Most of it can be explained by coincidence or natural effects; however, there are harder cases, such as very fast recoveries from diseases that, by the doctors’ prognosis, should have taken a few orders of magnitude longer (or should not have happened at all).
-the relics of saints. In some cases (quite often, actually), when after a long time the body of a dead person considered to be a saint is exhumed, it is found not to have decomposed. It is not a necessary condition; there are many saints for whom it does not hold. Still, it is an interesting question whether this effect is more frequent among saints than among ordinary people (counting only the cases where relics were exhumed after canonization, to exclude bias). If it is indeed significantly more frequent, what could be the reason? And why is the situation the opposite on Mount Athos, where non-decomposition of the body is considered a bad sign?
My feeling is that our crux is the prior probability for God, which we are discussing in the other thread. I think it is a little bit smaller than for the no-God hypothesis, and gilch thinks it is infinitesimal.
“God, who can likewise be credited for anything (even what looks like evil—‘all part of God’s plan’, or ‘God works in mysterious ways’, right?) is the same as the witch: no predictive power over ‘it’ whatsoever.”
Not exactly. First, I can predict that if I throw a stone it will fall down, and so on. A miracle may happen, but the probability of one happening out of nowhere is very small (though not zero). Second, I assign higher probabilities to miracles where they commonly happen (like the myrrh-streaming icon mentioned above, or healings, or answers to prayers). Under the no-God hypothesis I must set such probabilities to zero, whereas if there is a God I keep them finite. So, first, such a theory can predict something (whether the predictions are correct or not is a separate thread; I will go back to it when I have time). Second, its predictions do not always coincide with those of the no-God theory (or of a deist theory, in which God does not interact with the Universe), so it is a different theory.
“And worse, God’s complexity cost is not just relatively big like any intelligent mind (such as the witch) would be, but literally infinite if we say that God is omniscient: If God is a “halting oracle”, then God is not even contained in the set of all computer programs, because He is not computable: He can’t even be a hypothesis, only approximated. And to get a better approximation, you must use a longer computer program that encodes more of Chaitin’s constant, which is provably not compressible by any halting program. Better approximations of God get bigger without limit. The approximate God hypothesis has literally infinitesimal probability—you can’t escape it: The better the approximation gets, the less likely it is.”
Hmmm. Indeed, you are totally right here. I had actually never thought that incomprehensibility is directly connected with omniscience. Thank you very much for this; it makes me reconsider a lot of things.
We can indeed have only approximate knowledge of God. However, this approximate version of the whole hypothesis can be short enough to compete with the no-God hypothesis (remember, I was talking about the width of the function?).
So, for example, the zeroth approximation of the God hypothesis is that God does not interact with the Universe. It leads to basically the same predictions as the no-God hypothesis, so it should be eliminated (actually it is not that simple; I will say more about it closer to the end of this comment). The first-order approximation is a God who interacts with the Universe very rarely, so there are miracles with very low probability. The next orders give a clearer classification of these miracles. You see that these approximations have predictive power, are not significantly longer than the no-God hypothesis, and make a set of predictions that is not identical to it, so they are decent competitors.
What is the difference between such an approximation and the same approximation for alien teens, etc.? Why would we prefer the God hypothesis to the alien teens? Well, because saying “there is a God with such and such attributes” is simpler than saying “there are alien teens who forge a reality around us such that it looks as if there is a God with such and such attributes”.
But why do we need to say that there is an omniscient God at all, if all we are going to do is use approximations? Well, let me give you an analogy from mathematical physics. There is such a thing as M-theory. To be honest, M-theory has not been formulated. However, the mere assumption that such a theory exists (even though unformulated) leads to some interesting dualities between other theories. The same is true here: the assumption of an omniscient God gives fruitful approximations. Whether they are correct or not is the discussion on miracles in the other thread. But we cannot simply say that they have very low prior probabilities, since they are not significantly longer than the no-God hypothesis and lie within the width of the maximum of the probability distribution.
We are still discussing :)
Ok, let me restate it more precisely, so you can see whether I understand everything correctly and correct me if I do not.
1. We have the Universe, which is like a black box: we can make some experiment (collide particles, look at a particular region of the sky) and get some data. The Universe can be described as a mapping from the space of all possible inputs (experiments) to all possible outputs (observations). To be very precise, let us discuss not the observations of humanity as a whole (since you do not observe them directly), but only your own observations at a particular moment of time (your past experiments and observations now come from your memory, so they are outputs of your memory).
2. If there are 2^K possible inputs and 2^M possible outputs, there are in total 2^N = (2^M)^(2^K) possible mappings, i.e. N = M * 2^K.
3. We can represent this mapping as the output of a universal Turing machine (UTM) whose input is our hypothesis. There are different realizations of the UTM, so let us pick one of the minimal ones (see Wikipedia).
4. There will be more than one hypothesis giving the correct mapping: “the witch did it”, “Dumbledore did it”, etc. Let us study the probability that a given hypothesis is the shortest one reproducing the correct mapping. (If there is more than one shortest, let us pick the one assigned to the smaller binary number, or just pick randomly.) Under such a rule there is exactly one shortest hypothesis. It exists because there is some correct hypothesis, e.g. “the witch did it”, which might not be the shortest, so we just look among those that are shorter.
5. The probability for a hypothesis of length n to be the shortest hypothesis, with n < N, is a priori not larger than 2^(n-N), since there are 2^N possible mappings and only 2^n possible hypotheses of length n.
6. The anthropic principle does not help here. You know that you perceive input and produce output, but a priori you cannot assume anything about future inputs and outputs.
7. Now you want to introduce a new principle, predictivity: that you actually can predict things. I agree with introducing it. It amounts to the strong assumption that our mapping is one of those that can be produced by a short hypothesis. So you redefine the probabilities so that there is a peak at short hypotheses while the integral is still 1.
8. Let us look closer at our options. Funnily enough, Solomonoff’s lightsaber does not actually converge fast enough. Indeed, you have probability 2^(-n) for a particular hypothesis of length n, but there are in total 2^n hypotheses of length n, which gives 1 for all hypotheses of length n together. Thus you integrate 1 from 0 to infinity and obtain a divergence. To fix it you can simply take the probability to be 2^(-a n) with a > 1 (there is a small numeric sketch of points 2, 5 and 8 after this list).
9. However, is convergence the only thing we require a priori? I would say no. Indeed, can an input of length 1 to one of the minimal UTMs make it produce an output of length N >> 1 and halt? My probability for this is incredibly low. (Of course, you can construct a UTM that does this, but it will not be minimal.) Notice that I am not saying “complex input” or anything like that; I am concerned only with the size. I would say the same for all very small lengths. If you have some free time and are good at coding, you can play with the minimal known UTMs to see which smallest input produces a large but finite output; this would give an estimate of how small n can be (a toy version of this experiment is also sketched after the list). Let us call this length n_0.
10. Now we would like a function that is almost zero for n significantly smaller than n_0, grows fast around n_0, and then decays (fast enough to keep the integral convergent). So it will have a maximum, and this maximum will have some width. What is that width? Is it just a matter of taste? To understand this, let us return to the reason we started searching for this function in the first place: the need for predictivity.
11. Since we basically need to be able to predict future observations, the width of the function is limited by us. If it is too wide and we have to include a highly complicated hypothesis, we fail, simply because it is too hard for us to calculate anything from such a complicated hypothesis. Thus we limit ourselves to hypotheses simple enough to use, and this sets the width of the function.
12. To sum up, if hypothesis B is more complicated than A but can still be used to make predictions, it should not be discarded by assigning it a very low prior probability compared with hypothesis A.
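To make points 2, 5 and 8 concrete, here is a small numeric sketch (the values of K, M, n and a below are arbitrary illustrative choices, nothing is forced by the argument):

```python
# A toy numeric sketch of points 2, 5 and 8 (K, M, n, a are arbitrary choices).
from fractions import Fraction

# Point 2: with 2^K possible inputs and 2^M possible outputs per input,
# the number of possible mappings is 2^N = (2^M)^(2^K), i.e. N = M * 2^K.
K, M = 3, 4
N = M * 2**K
print(f"N = M * 2^K = {N}, so there are 2^{N} possible mappings")

# Point 5: only 2^n hypotheses of length n exist, so the a priori chance that
# one of them reproduces the correct mapping is at most 2^(n-N).
for n in (5, 10, 20):
    print(f"n = {n}: bound 2^(n-N) = {Fraction(1, 2**(N - n))} ~ {2.0**(n - N):.1e}")

# Point 8: a 2^(-n) prior gives total weight 2^n * 2^(-n) = 1 to *every* length n,
# so the sum over lengths diverges; 2^(-a*n) with a > 1 gives a geometric series.
a = 1.5
mass_per_length = [2.0 ** ((1 - a) * n) for n in range(1, 60)]
print("a = 1: every length contributes 1, so the total diverges")
print(f"a = {a}: total over lengths 1..59 = {sum(mass_per_length):.4f} "
      f"(limit {1 / (2**(a - 1) - 1):.4f})")
```

For realistic N (all your observations) the bound in point 5 becomes astronomically small, which is exactly the problem predictivity is supposed to fix.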
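And here is a toy version of the experiment suggested in point 9. Instead of feeding short inputs to a minimal UTM, it enumerates all 2-state, 2-symbol Turing machines as a stand-in for “very short descriptions” and checks how many 1s a halting one can leave on the tape; the step cutoff is arbitrary, so treat the result only as what this toy class can do:

```python
# Toy experiment: how much output can a tiny Turing machine produce and still halt?
from itertools import product

STATES = (0, 1)          # two working states; -1 denotes the halting state
SYMBOLS = (0, 1)
MOVES = (-1, 1)          # move left or right
STEP_LIMIT = 100         # arbitrary cutoff; machines still running are treated as non-halting

def run(table):
    """table[(state, symbol)] = (write, move, next_state); return the number
    of 1s left on the tape if the machine halts, otherwise None."""
    tape, head, state = {}, 0, 0
    for _ in range(STEP_LIMIT):
        write, move, nxt = table[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        state = nxt
        if state == -1:                       # entered the halting state
            return sum(tape.values())
    return None

keys = list(product(STATES, SYMBOLS))
choices = list(product(SYMBOLS, MOVES, STATES + (-1,)))  # (write, move, next_state)
best = 0
for entries in product(choices, repeat=len(keys)):
    result = run(dict(zip(keys, entries)))
    if result is not None:
        best = max(best, result)
print("max number of 1s left by a halting 2-state machine:", best)  # expected: 4
```

A real estimate of n_0 would of course require one of the actual minimal UTMs, which is a much bigger project.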
Ok, so I have studied Solomonoff’s lightsaber. I used this blog post: https://www.lesswrong.com/posts/Kyc5dFDzBg4WccrbK/an-intuitive-explanation-of-solomonoff-induction
Please correct me if I am wrong, but I feel that there is a... well, not a mistake... an assumption that is not necessarily true. What I mean is the following. Let us consider the space of all possible inputs and the space of all possible outputs of the Turing machine (yes, both are infinite-dimensional, who cares). The data (our Universe) lives in the space of outputs, the theory to test in the space of inputs. Now, before any assumptions about data and theory, what is the probability that an arbitrarily chosen input of length n leads to an output of length N (since the output is all the observed data of our Universe, N is pretty large)? That is the prior probability, correct?
Now recall a simple fact about data compression: a universal compression algorithm does not exist, since otherwise you would have a bijection between the space of all possible sequences of length N and of length N1 < N, which is impossible. Therefore the majority of outputs of length N cannot be produced by an input of length n (basically, only 2^n out of the 2^N have any chance of being produced that way). For the vast majority of these outputs, the shortest input producing them is just an algorithm that copies a large part of itself to the output, i.e., the a priori hypothesis is incredibly long.
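A minimal counting sketch of this pigeonhole point (N and n below are just illustrative sizes, far smaller than anything realistic):

```python
# Pigeonhole counting: short descriptions cannot cover most long outputs.
N = 16        # length of the output string (the "data"), illustrative
n = 8         # length of a candidate short input (the "theory"), illustrative

outputs = 2**N                                      # strings of length N
shorter_strings = sum(2**k for k in range(N))       # strings of length < N
short_inputs = sum(2**k for k in range(n + 1))      # inputs of length <= n

# There are fewer strings shorter than N than strings of length N,
# so no lossless (injective) compressor can shorten every length-N string.
print(shorter_strings, "<", outputs)                # 65535 < 65536

# Each input produces at most one output, so at most this tiny fraction of
# length-N outputs can be produced by any input of length <= n.
print(f"fraction reachable from short inputs: {short_inputs / outputs:.4%}")
```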
The fact that we always look for something simpler is an assumption of simplicity. Our Universe apparently happens to be governed by a set of simple laws, so it works. But it is an assumption, an axiom. It is not a corollary of some piece of math; from the math alone, the prior expectation is an awfully complex hypothesis.
If you take this assumption as an initial axiom, it is quite logical to set incredibly low priors for God. However, starting from pure math, the prior for this axiom itself is infinitesimal. The prior for the God hypothesis is also infinitesimal, no doubt. Well, for my God hypothesis, since it then leads to your axiom (restricted to the Universe) as a consequence. For “the witch from the neighborhood did it”, followed by a copy-paste of all the Universe’s data into “it”, the prior should actually be higher, for the reason discussed above.
Why don’t we keep the “witch” hypothesis then? Well, because its predictive strength is zero. So basically we keep the simplicity assumption, in spite of its incredibly low prior, because of its predictive strength. And if we want to compare it with various supernatural hypotheses, we should compare predictive strength. You cannot cast them out just because of their priors; those are not lower.
I will leave for a few days—need to do my job and to learn everything you recommended. Thanks to everyone, see you soon!
I thought about it immediately after reading HPMOR :) . It still seems quite unlikely to me (if I correctly guessed a natural number between 1 and 100 one time out of a hundred, that would absolutely be bias; but I feel that for me the frequency is significantly higher than the probability).
Well, ok, we can discuss priors later; I need some time to learn this.
Why I don’t think it is a natural effect: simply because of the amount and the duration. Everything that could have been inside should have run out long ago.
Why I don’t think it is a hoax: well, that is more complicated. I would say the probability of it being a hoax is very low, and since my priors are not as low as yours, that works for me. Now, why I estimate the probability of a hoax to be low:
1. If you read the story attentively, you see that there was an icon like this before (pretty recently, actually, in the last quarter of the 20th century); there was also a person who discovered it and traveled with it everywhere (just as the current keeper travels now). That previous person was tortured and killed, the icon disappeared, and the murderers were never found. Knowing this story, it would be quite a crazy idea to stage such a mystification. To put your life at risk for what? For a stupid hoax? You would have to be crazy to do it. And the current keeper serves in the police, so he must have regular checks of his psychological state. Finally, purely anecdotal evidence: I saw him once, and he seems to be a normal guy (of course, I am not a specialist; it is just a slight decrease in the probability of him being a psycho).
2. If I had to pull off such a hoax, I would put some source of myrrh inside and refill it periodically. That can be done, of course. I could even believe it could be done in such a way that an observer handling the icon would not notice any difference from an ordinary icon. But is it possible to avoid X-rays somehow? They travel by plane, and I bet they do not put the icon into checked luggage (it is too precious), so they must carry it as hand luggage. There, customs officers using X-rays would see small vessels inside the icon and ask what they are. And that is it, the hoax is over.
I will disappear from here for a few days: I need to do my job, and also to learn everything you sent me.
So, regarding omnibenevolence: again, we first need to clarify what “good” and “evil” mean, and to check whether we understand them the same way; otherwise it becomes an argument about definitions. I am not sure it is possible to give a precise definition, but let me ask a few questions to see whether we answer them the same way or understand them differently.
1) Is “good” only utilitarian (i.e., for some higher purpose, and if so, which one?) or also deontological (i.e., some things are good by default: it is good to bring some joy to the life of an old person even if he is totally useless and senile, and so on)?
2) Is good only a result, or can there be goodness in the process? Is there any goodness in striving and gaining, in playing a hard game and winning, or does only the final result matter?
Regarding there being no need for this hypothesis: somewhere below there is a thread where we argue about miracles.
1) No, it is logically impossible (I think so).
2) 3) I don’t know. I would say “can but will not because of omnibenevolence”.
4) The thing in the Old Testament I understand as sarcasm from God. I would say we can become “lower gods”.
This problem basically requires a definition of what “good” is first. For example, is it better to give a gift immediately, or to give the person a difficult task first, knowing that he is capable of completing it, and then give the gift as a reward? In which case would the person feel better?
Also, when you say it is your crux, do you mean that the statement “an omnipotent, omnibenevolent God is incompatible with the observed Universe” is a crux for “there is no omnipotent, omnibenevolent God”? So if this statement were false, you would believe that there is such a God? :)
Thank you! I will read it and try to understand it.
I would say that God does not have a God; otherwise we would consider that second one to be the true God.
Regarding the halting oracle, let me first read about it and understand what it is.
Here I need more time.
Well, ok, I do not mean visions, voices, etc. (I mean they can happen, but I have not experienced them). For me it is rather answers to my questions in the form of quite unlikely coincidences. But this is just for me; I would expect it to be individual.
I am not familiar with the miracles of other religions; I would even say I do not have a solid opinion about them. My idea of God does not forbid miracles outside Christianity. (It is not syncretism; it is just that the same God may, for some reason, do miracles for people outside Christianity too.)
I would even agree that a lot of things considered to be miracles are not. However, I can name a couple of things I believe to be actual miracles. This, for example: https://www.orthodoxhawaii.org/icons
It would be helpful if there were some algorithm or formula connecting complexity with prior probability. Otherwise, I can say that probability decays logarithmically with complexity, you will say that it decays exponentially, and we will get totally different prior probabilities and totally different results. Do you know if such a thing exists?
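Just to show numerically how much the answer depends on the functional form, here is a tiny sketch; the two decay laws and the description lengths are arbitrary illustrative choices, not anyone’s actual proposal:

```python
# How the choice of decay law changes the relative prior of two hypotheses.
import math

n_simple, n_complex = 50, 100    # description lengths of two competing hypotheses

def exponential_prior(n):
    # each extra bit of description halves the prior
    return 2.0 ** (-n)

def slow_prior(n):
    # a much slower, roughly logarithmic-style decay: ~ 1 / (n * log2(n)^2)
    return 1.0 / (n * math.log2(n) ** 2)

for name, prior in (("exponential", exponential_prior), ("slow", slow_prior)):
    print(f"{name:12s} prior ratio simple/complex = "
          f"{prior(n_simple) / prior(n_complex):.3e}")

# The exponential law punishes the longer hypothesis by ~2^50 ~ 1e15,
# the slow law by a factor of only ~2.8, so the same evidence can lead
# to completely different posteriors.
```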
“You seem to be arguing that we can bias our prior to accept an approximate God at the very edge of the “width”. I say the rights of Mortimer Q. Snodgrass are being violated.”
No. If you read the comment about the width of the function, you can see that my argument is not about God at all, but about what we need from a hypothesis (predictivity).
“The alien hypothesis dominates the God hypothesis, because God is infinitely improbable, but aliens are only finitely improbable.”
No. We use the approximation, and the approximation has the same size for both of them (we are comparing the hypotheses “there is a God with such and such attributes” and “there are aliens who forge a reality around us so that it looks as if there is a God with such and such attributes”). The algorithm for constructing this approximation, though, is simpler for the pure God hypothesis (it uses the mere fact of the hypothesis’s existence without formulating it, just as we establish dualities between different types of string theories using the fact that M-theory exists without formulating it), since it does not require the transitional link of “hidden aliens”.
“Why your God, …”
Suppose I tell you, soon after the discovery of the muon, that there is another particle, like the electron, but with mass 105.6583745(24) MeV and lifetime 2.1969811(22) microseconds. You would tell me: “Ok, I can assume there is a particle like the electron, although I would assign it quite a low probability. But to believe that its mass is 105.6583745(24) MeV!? No, that is absurd: there are a trillion other possibilities!”
Of course. The a priori probability for each of the different gods is approximately the same. Together they add up to the prior probability that there is some God, and I was arguing that this prior probability is finite. Then, after you make observations, you can discover more attributes of God and arrive at Allah, Christ, the Flying Spaghetti Monster, aliens, or nothing beyond the Laws of Nature.