It would be helpful if there were some algorithm or formula connecting complexity with prior probability. Otherwise, I can say that probability decays logarithmically with complexity, and you will say that it decays exponentially, and we will get totally different prior probabilities and totally different results. Do you know if such a thing exists?
The simplest explanation for anything is “The lady down the street is a witch; she did it.” Right?
No? How is that explanation any worse than “God did it”? We can at least see that the lady down the street exists.
The magic algorithm is Solomonoff’s lightsaber. It’s not realistically computable, but it does give us a much better sense of what I mean by complexity, and how that should affect priors.
Please correct me if I am wrong, but I feel that there is a … well, not a mistake… an assumption that is not necessarily true. What I mean is the following. Let us consider the space of all possible inputs and the space of all possible outputs of the Turing machine (yes, both are infinite-dimensional, who cares). The data (our Universe) is in the space of outputs; the theory to test is in the space of inputs. Now, before any assumptions about data and theory, what is the probability that an arbitrarily chosen input of length n leads to an output of length N (since the output is all the observed data from our Universe, N is pretty large)? This is what the prior probability is, correct?
Now we remember the simple fact about data compression: a universal compression algorithm does not exist; otherwise you would have a bijection between the space of all possible sequences of length N and those of length N1 < N, which is impossible. Therefore, the majority of outputs of length N cannot be produced by any input of length n (basically, only 2^n out of the 2^N have any chance of being produced that way). For the vast majority of these outputs, the shortest input producing them is just an algorithm that copies a large part of itself to the output—i.e., a priori the hypothesis is incredibly long.
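A minimal sketch of that counting point, using Python's zlib as a crude stand-in for "the shortest input" (an illustration only, not Kolmogorov complexity itself):

```python
import os
import zlib

# zlib here is only a rough proxy for "shortest description".
random_data = os.urandom(10_000)            # incompressible with overwhelming probability
patterned_data = bytes(range(256)) * 40     # highly regular data of similar size

print(len(zlib.compress(random_data)))      # slightly MORE than 10,000 bytes: no shorter description found
print(len(zlib.compress(patterned_data)))   # far fewer bytes: a much shorter description exists

# The counting bound behind this: at most 2**n of the 2**N strings of length N
# can be the output of any description of length n, i.e. a fraction of at most 2**(n - N).
```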
The fact that we are always looking for something simpler is an assumption of simplicity. Our Universe apparently happened to be governed by a set of simple laws, so it works. However, this is an assumption, or axiom. It is not a corollary of some math—from the math alone, the prior should favor an awfully complex hypothesis.
If you take this assumption as an initial axiom, it is quite logical to set incredibly low priors for God. However, starting from pure math, the prior for this axiom itself is infinitesimal. The prior for the God hypothesis is also infinitesimal, no doubt. Well, for my God hypothesis, since it then leads to your axiom (restricted to the Universe) as a consequence. For "the witch from the neighborhood did it", followed by copy-pasting all the Universe's data into "it", the priors should actually be higher, for the reason discussed above.
Why don’t we then keep the “witch” hypothesis? Well, because its predictive strength is zero. So basically we keep the simplicity hypothesis, in spite of its incredibly low prior, because of its predictive strength. And if we want to compare it with various supernatural hypotheses, we should compare their predictive strength. You cannot cast them out just because of priors. Their priors are not lower.
Please correct me if I am wrong, but I feel that there is a … well, not a mistake… an assumption that is not necessarily true. What I mean is the following. Let us consider the space of all possible inputs and the space of all possible outputs of the Turing machine (yes, both are infinite-dimensional, who cares). The data (our Universe) is in the space of outputs; the theory to test is in the space of inputs. Now, before any assumptions about data and theory, what is the probability that an arbitrarily chosen input of length n leads to an output of length N (since the output is all the observed data from our Universe, N is pretty large)? This is what the prior probability is, correct?
No? Perhaps you were trying to do something else, but the above is not a description of Solomonoff induction.
Where exactly is the faulty assumption here?
In Solomonoff induction, the observations of the universe (the evidence) are the inputs. We also enumerate all possible algorithms (the hypotheses modeling the universe) and for each algorithm run it to see if it produces the same evidence observed. As we gain new bits of evidence, we discard any hypothesis that contradicts the evidence observed so far, because it is incorrect.
The fact that we are always looking for something simpler is an assumption of simplicity. Our Universe apparently happened to be governed by a set of simple laws, so it works. However, this is an assumption, or axiom. It is not a corollary of some math—from the math alone, the prior should favor an awfully complex hypothesis.
What probability should you assign to the proposition that the next observed bit will be a 1? How should we choose between the infinite remaining models that have not yet contradicted observations? That’s the question of priors. We have to weight them with some probability distribution, and (when normalized), they must sum to a probability of 100%, by definition of “probability”. We obviously can’t give them all equal weight or our sum will be “infinity”. Giving them increasing weights would also blow up. Therefore, in the limit probabilities must decrease as we enumerate the hypotheses.
To be more precise, for every ϵ > 0, there is some length l such that the total probability assigned to all programs longer than l is at most ϵ.
Otherwise, I can say that probability decays logarithmically with complexity, and you will say that it decays exponentially, and we will get totally different prior probabilities and totally different results.
Can you? It’s not enough that it decays; it must decay fast enough to not diverge to infinity. Faster than the harmonic series (which is logarithmically divergent), for example.
Solomonoff’s prior is optimal in some sense, but it is not uniquely valid. Other decaying distributions could also converge on the correct model, just more slowly. The exact choice of probability distribution is not relevant to our discussion here, as long as we use a valid one.
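A quick numerical way to see what "fast enough" means, counting 2^n candidate hypotheses at each length n and trying different per-hypothesis weights (a toy tally, not Solomonoff's actual prefix-free construction):

```python
def total_mass(weight, max_len=200):
    """Sum weight(n) over all 2**n hypotheses of each length n, up to max_len."""
    return sum((2 ** n) * weight(n) for n in range(1, max_len))

print(total_mass(lambda n: 2.0 ** -n))             # ~199: grows with max_len, so it diverges
print(total_mass(lambda n: 1.0 / (n * 2.0 ** n)))  # harmonic series: still diverges, just slowly
print(total_mass(lambda n: 2.0 ** (-1.5 * n)))     # decays faster than 2**-n: converges (~2.41)
```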
the majority of outputs of length N cannot be produced by any input of length n (basically, only 2^n out of the 2^N have any chance of being produced that way). For the vast majority of these outputs, the shortest input producing them is just an algorithm that copies a large part of itself to the output—i.e., a priori the hypothesis is incredibly long.
The fact that we are always looking for something simpler is an assumption of simplicity. Our Universe apparently happened to be governed by a set of simple laws, so it works. However, this is an assumption, or axiom. It is not a corollary of some math—from the math alone, the prior should favor an awfully complex hypothesis.
If the observation is in no way compressible, then there is no model simpler than the observation itself, and your prediction for the next bit can be no better than chance. Maybe you haven’t observed enough yet, and future bits will compress.
But there can be no agents in a totally random universe, because there is no way to predict the consequences of potential actions. We can rule that case out for our universe by the anthropic principle.
If you take this assumption as an initial axiom, it is quite logical to set incredibly low priors for God.
That’s right. So what is your alternative? Give up on induction altogether? That’s completely untenable.
Ok, let me repeat more precisely so you can see whether I understand everything correctly, and correct me if I do not.
1. We have the Universe, which is like a black box: we can make some experiment (collide particles, look at a particular region of the sky) and get some data. The Universe can be described as a mapping from the space of all possible inputs (experiments) to the space of all possible outputs (observations). To be very precise, let us discuss not the observations of humanity as a whole (since you do not observe them directly), but only your own observations at a particular moment in time (your past experiments and observations now come from your memory, so they are outputs from your memory).
2. If there are 2^K possible inputs and 2^M possible outputs, there are in total 2^N = (2^M)^(2^K) possible mappings.
3. We can represent this mapping as an output of a universal Turing machine (UTM), whose input will be our hypothesis. There are different realizations of the UTM, so let us pick one of the minimal ones (see Wikipedia).
4. There will be more than one hypothesis giving the correct mapping: “Witch did it”, “Dumbledore did it”, etc. Let us study the probability that a given hypothesis is the shortest one that reproduces the correct mapping. (If there is more than one shortest, let us pick the one assigned to the smaller binary number, or just pick randomly.) Under such a rule there is exactly one shortest hypothesis. It exists because there is at least one correct hypothesis, “Witch did it”, which might not be the shortest, so we just look for those that are shorter.
5. The probability that a hypothesis of length n is this shortest hypothesis, for n < N, is a priori not larger than 2^(n-N), since there are 2^N possible mappings and only 2^n possible hypotheses.
6. The anthropic principle does not help here. You know that you perceive input and produce output, but you cannot assume anything about future input and output—a priori.
7. Now you want to introduce a new principle—predictivity, that you actually can predict stuff. I agree with introducing it. This leads to the strong assumption that our mapping is actually one of those that can be produced by a short hypothesis. So, you redefine the probabilities such that you have a peak for short hypotheses, while the integral is still 1.
8. Let us look closer at our options. Funny that Solomonoff’s lightsaber actually does not converge fast enough here. Indeed, you have a 2^(-n) probability for a particular hypothesis of length n, but there are 2^n hypotheses of length n in total, which gives you 1 for all the hypotheses of length n. Thus you effectively sum 1 over all lengths from 0 to infinity, obtaining a divergence. To fix it you can simply take the probability to be 2^(-a n) with a > 1.
9. However, is convergence the only thing we require a priori? I would say no. Indeed, can an input of length 1 to one of the minimal UTMs make it produce an output of length N >> 1 and halt? My probability for this is incredibly low. (Of course, you can construct a UTM such that it will—but it will not be minimal.) Notice that I do not say “complex input” or anything like that; I am concerned only with the size. I would say the same for all very small lengths. If you have some free time and are good at coding, you can play with the smallest known UTMs to see which smallest input produces a large but finite output—this would give an estimate of how small n can be. Let us call it n_0.
10. Now we would like a function that is almost zero at n significantly smaller than n_0, grows fast around n_0, and then decays (fast enough to keep the integral convergent). So it will have a maximum, and this maximum will have some width (a toy version of such a function is sketched after this list). What is its width? Is it just a matter of taste? To understand this, let us return to the reason we started the search for this function—the need for predictivity.
11. So, since we basically need to be able to predict future observations, the width of the function is limited by us. If it is too wide and we have to include a highly complicated hypothesis, we fail—simply because it is too hard for us to calculate anything based on such a complicated hypothesis. Thus, we just limit ourselves to hypotheses simple enough to use, and this gives the width of the function.
12. To sum up, if hypothesis B is more complicated than A, but can still be used to give predictions, it should not be discarded by assigning it a very low prior probability in comparison with hypothesis A.
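Here is a toy version of the function described in points 9-11, with made-up values for n_0 and the decay rate (illustrative assumptions only, not claims about any real UTM):

```python
import math

n0 = 20       # assumed: roughly the shortest input that yields a large but finite output
a = 1.5       # assumed decay rate, chosen > 1 so the total mass converges
max_len = 300

def raw_weight(n):
    rise = 1.0 / (1.0 + math.exp(-(n - n0)))   # suppresses lengths well below n0
    decay = 2.0 ** (-a * n)                    # per-hypothesis weight, decaying with length
    return (2 ** n) * rise * decay             # 2**n hypotheses of length n in this toy count

Z = sum(raw_weight(n) for n in range(1, max_len))
prior = {n: raw_weight(n) / Z for n in range(1, max_len)}

peak = max(prior, key=prior.get)
width = sum(1 for n in prior if prior[n] > prior[peak] / 2)   # rough width at half maximum
print(peak, width)
```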
I’m not sure if this is all correct. What you’re describing doesn’t exactly sound like Solomonoff induction, but you do seem to have a grasp of the principles involved.
Solomonoff induction does not discard any program that is consistent with the observations so far. But for any observation string there are an infinite number of programs that produce that string. There is a shortest one; an infinite class of that program prefixed by some whole number of no-operations (computations that undo themselves); compilers implementing that same program in encodings of other programming languages; interpreters implementing that same program in encodings of other programming languages; entire universes containing people who happen to be simulating one of these (which may be considered an unreliable type of interpreter); and arbitrary nestings of any of the above any number of times. None of this is discarded. But again, the set is infinite, so no matter what distribution you choose, the probabilities after some point must decrease for it to converge.
The point about the witch isn’t that “witch” is a complex cost to encode in a program (although it is), but that “she did it” fails to compress the data at all, because you still have to encode what the pronoun “it” is referring to. Because a “witch” can be blamed for literally anything, adding a “witch” to the uncompressed hypothesis “it” adds no predictive power whatsoever. (If you can compress “it” some other way, then you can make predictions without the witch and she is useless to your model.)
God, who can likewise be credited for anything (even what looks like evil—”all part of God’s plan”, or “God works in mysterious ways”, right?) is the same as the witch: no predictive power over “it” whatsoever. And worse, God’s complexity cost is not just relatively big like any intelligent mind (such as the witch) would be, but literally infinite if we say that God is omniscient: If God is a “halting oracle”, then God is not even contained in the set of all computer programs, because He is not computable: He can’t even be a hypothesis, only approximated. And to get a better approximation, you must use a longer computer program that encodes more of Chaitin’s constant, which is provably not compressible by any halting program. Better approximations of God get bigger without limit. The approximate God hypothesis has literally infinitesimal probability—you can’t escape it: The better the approximation gets, the less likely it is.
And the true God hypothesis is not even in the running. It literally cannot be proved by induction at all. Nor can you take God as an axiom. (I will dismiss it as the fallacy of special pleading: applying this privilege lets us prove anything, even false gods.) The only hope then is proving deductively from some logical necessity, or giving up on omniscience as defined, which of course, opens the possibility of there being beings greater than whichever God you choose.
″ God, who can likewise be credited for anything (even what looks like evil—”all part of God’s plan”, or “God works in mysterious ways”, right?) is the same as the witch: no predictive power over “it” whatsoever.”
Not exactly. First, I can predict that if I throw a stone it will fall down, and things like that. A miracle may happen, but the probability of it happening out of nowhere is very small (though not zero). Second, I give higher probabilities to events in settings where miracles are commonly reported (like the myrrh-streaming icon mentioned above, or healing, or answers to prayers). With the no God hypothesis I must set such probabilities to zero, while if there is a God I keep them finite. So, first, such a theory can predict something (whether the predictions are correct or not is a separate thread of discussion; I will go back to it when I have time from this thread). Second, the predictions do not always coincide with the predictions of the no God theory (or of a deist theory, that there is a God who does not interact with the Universe), so it is a different theory.
″ And worse, God’s complexity cost is not just relatively big like any intelligent mind (such as the witch) would be, but literally infinite if we say that God is omniscient: If God is a “halting oracle”, then God is not even contained in the set of all computer programs, because He is not computable: He can’t even be a hypothesis, only approximated. And to get a better approximation, you must use a longer computer program that encodes more of Chaitin’s constant, which is provably not compressible by any halting program. Better approximations of God get bigger without limit. The approximate God hypothesis has literally infinitesimal probability—you can’t escape it: The better the approximation gets, the less likely it is. ”
Hmmm. Indeed, you are totally right here. I had actually never thought that incomprehensibility is directly connected with omniscience. Thank you very much for this; it makes me reconsider a lot of things.
We can indeed have only approximate knowledge of God. However, this approximate version of the whole hypothesis can be short enough to compete with the no God hypothesis (remember, I was talking about the width of the function?).
So, for example, the zeroth approximation of the God hypothesis is that God does not interact with the Universe. It basically leads to the same predictions as the no God hypothesis, so it should be eliminated (actually it is not that simple; I will talk more about it closer to the end of this comment). The first-order approximation will be a God who very rarely interacts with the Universe, so there are miracles with very low probability. The next orders will give a clearer classification of these miracles. You see that these approximations have predictive power, are not significantly longer than the no God hypothesis, and their set of predictions is not identical—so they are decent competitors.
What is the difference between such an approximation and the same approximation for alien teens, etc.? Why would we prefer the God hypothesis to the alien teens? Well, because saying “there is a God with such and such attributes” is simpler than saying “there are alien teens who form around us a reality such that it looks like there is a God with such and such attributes”.
But why do we need to say that there is an omniscient God at all if all we are going to do is use approximations? Well, let me give you an analogy from mathematical physics. There is such a thing as M-theory. Well, to be honest, M-theory is not formulated. However, merely the assumption that such a theory exists (even though it is not formulated) leads to some interesting dualities between other theories. The same is true here. The assumption of an omniscient God gives fruitful approximations. Whether they are correct or not—that is the discussion on miracles in the other thread. But we cannot simply say that they have very low prior probabilities, since they are not significantly longer than the no God hypothesis and are within the width of the maximum of the prior probability distribution.
Hmmm. Indeed, you are totally right here. I had actually never thought that incomprehensibility is directly connected with omniscience. Thank you very much for this; it makes me reconsider a lot of things.
An update of beliefs! We are making progress.
However, this approximate version of the whole hypothesis can be short enough to compete with no God hypothesis
So are you weakening the original claim? You are no longer trying to persuade me of an omniscient being, but only a sufficiently knowledgeable one?
What is the difference between such an approximation and the same approximation for alien teens, etc.? Why would we prefer the God hypothesis to the alien teens?
Yeah, at this point, I think we may be talking about aliens, not God, but we’re going to use your definitions of the terms. I personally wouldn’t expect omniscience of a small-g “god”.
Well, because saying “there is a God with such and such attributes” is simpler than saying “there are alien teens who form around us a reality such that it looks like there is a God with such and such attributes”.
I don’t really agree with that, and here is an illustration of why:
Suppose I tell you that I have an aunt that owns a dog. I think most people would just believe me. Aunts are not at all rare, and neither are dogs. Maybe I could be lying to prove a point, but dogs are so common, that I probably could have picked another relative with no need to lie about it.
Now, suppose I tell you that I have an uncle who owns a tiger. I think most people would not just believe that easily. There certainly are people who own tigers though. So maybe you’d be persuaded with a little more proof. Maybe I could show you a picture. That might help until you realize that you’ve only ever met me online, and have no idea what I look like. Maybe I’m not the man in the photo (I could be a woman for all you know), and maybe the owner is not my uncle. Maybe I could do a video chat with you and you could see I have the same face. That would help, but maybe I used Photoshop on the tiger picture to insert my face. At some point though, the evidence would be good enough, or you’d call my bluff.
Now, suppose I tell you that I have a nephew who has a pet purple martian dragon. Your first impression might be, is that a Pokemon? A toy? (Even understanding what another person is saying requires some shared priors.) “No, I mean it’s literally an alien creature from Mars,” I say. Did he tell you that? Kids have wild imaginations. “No, no, I saw it.” OK, we know life exists on Earth, there’s no physical reason why it couldn’t exist on other planets. It’s not outside the realm of possibility, but you’re going to need a lot more evidence than for the tiger.
Now, suppose I tell you that I have a niece with a pet genie. He can turn invisible and follows her to school. She gives him lamp rubs and sometimes he grants her minor wishes using magical powers when he’s in a good mood. Does this seem more or less likely than the purple dragon? The dragon is at least compatible with what we know of science. Magical powers, not so much.
The above stories are an illustration of how you, or people in general are already using priors. The lower the prior, the more evidence is required to overcome that prior.
Now suppose I tell you that I have an internet acquaintance who has an invisible friend named Steve, whom he communicates with via mental telepathy, although Steve seems oddly reluctant to answer sometimes. Steve has phenomenal cosmic magical powers and can rearrange stars and stuff. “Have you ever seen Steve do this?”, we ask. “No, but I’ve seen him remotely draw pictures of his mother on toast.” I don’t know about you, but I’m gonna need a little more proof than that. Right? Does this sound more or less likely than my niece’s genie? Even the genie could explain the toast. Not only is Steve invisible, he has stronger magic? Wouldn’t we need at least as much evidence as for the genie? Or for the dragon? The tiger?
Oops, hold on. My acquaintance tells me I got the name wrong—it was a glitch in Google Translate. His real name is not Steve—it’s Jesus. (. . .) I guess that settles it.
But why do we need to say that there is an omniscient God at all if all we are going to do is use approximations? Well, let me give you an analogy from mathematical physics.
Past some point these cases are indistinguishable with the finite amount of available evidence anyway, so I would argue that the difference is meaningless, at least from the perspective of induction on evidence: it makes no difference to the resulting predictions.
However, the difference may still matter to arguments of logical necessity, and if your faith has some creed that cares about the distinction, the weakened definition of God may still be a problem for you.
Well, a violation of the Laws of Nature is a violation of the Laws of Nature, whether it applies to remote drawing without any interaction or to moving stars. If Steve can draw pictures on toast remotely, he violates the Laws of Nature, and the hypothesis that the Universe is completely controlled by the Laws of Nature, without any Higher Power, aliens, the guy who runs a simulation, etc.—is falsified.
Now, going back to aliens vs God hypothesis.
″ The dragon is at least compatible with what we know of science. Magical powers, not so much. ”
The problem is that compatibility of a hypothesis with what we knew before is not an argument at all when we are talking about fundamental hypotheses (i.e., not “who stole my car” but hypotheses explaining the Universe). Indeed, look at the history of Quantum Mechanics. Initially, a lot of scientists hated the idea that the probabilistic description of the Universe is fundamental, so they came up with the hidden-parameters idea. Everything they knew before was deterministic: if you knew all the velocities and positions of all the molecules, you could predict everything exactly—but you did not, and that is where classical probability came in. So they just suggested the same idea for Quantum Mechanics: that actually everything is still deterministic, we just don’t know the hidden parameters, and that is why observations appear probabilistic. You do not need to invent a modified Turing machine that produces different outputs with different probabilities; you are still fine with the good old deterministic Turing machine. Looks much better, right?
Then it turned out that you actually can distinguish between the hidden-parameters hypothesis and the fundamentally probabilistic hypothesis—see Bell’s inequalities. And the experimental tests demonstrated that there are no hidden parameters. QM is fundamentally probabilistic.
Thus, the fact that we need to throw all our current assumptions in the trash can and build a theory based on new assumptions does not mean that we should assign a small probability to this new theory and need hidden parameters or hidden aliens given the same observations. It just means we maybe were wrong.
Thus, the fact that we need to throw all our current assumptions in the trash can and build a theory based on new assumptions does not mean that we should assign a small probability to this new theory and need hidden parameters or hidden aliens given the same observations.
The alien hypothesis dominates the God hypothesis, because God is infinitely improbable, but aliens are only finitely improbable.
However, this approximate version of the whole hypothesis can be short enough to compete with no God hypothesis (remember, I was talking about the width of the function? ).
You seem to be arguing that we can bias our prior to accept an approximate God at the very edge of the “width”. I say the rights of Mortimer Q. Snodgrass are being violated.
Why your God,
Rather than Allah, the Flying Spaghetti Monster, or a trillion other gods no less complicated—never mind the space of naturalistic explanations![?]
“You seem to be arguing that we can bias our prior to accept an approximate God at the very edge of the “width”. I say the rights of Mortimer Q. Snodgrass are being violated.”
No. If you read the comment about the width of the function, you can see that my argument is not about God at all, but about what we need from a hypothesis (predictivity).
″ The alien hypothesis dominates the God hypothesis, because God is infinitely improbable, but aliens are only finitely improbable. ”
No. We use the approximation, and the approximation has the same size for both of them (we are comparing the hypotheses “There is a God with such and such attributes” and “There are aliens who make us believe that there is a God with such and such attributes”). The algorithm for constructing this approximation, though, is simpler for the pure God hypothesis (using the mere fact of its existence without formulating the hypothesis itself, just as we establish dualities between different types of string theories using the fact that M-theory exists without formulating it), since it does not require the transitional link of “hidden aliens”.
“Why your God,
Rather than Allah, the Flying Spaghetti Monster, or a trillion other gods no less complicated—never mind the space of naturalistic explanations![?]
″
Suppose I tell you, soon after the discovery of the muon, that there is another particle, like the electron, but with mass 105.6583745(24) MeV and lifetime 2.19698119(22) microseconds. You would tell me: “Ok, I can assume that there is a particle like the electron, although I would put quite a low probability on it. But to believe that its mass is 105.6583745(24) MeV!? No, it is absurd—there are a trillion other possibilities!”
Of course. The a priori probability for all the different gods is approximately the same. In total, they add up to the prior probability that there is some God—and I was arguing that this prior probability is finite. Then, after you make observations, you can discover more attributes of God and arrive at Allah, Christ, the Flying Spaghetti Monster, aliens, or nothing beyond the Laws of Nature.
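For concreteness, with toy numbers (both figures below are entirely made up), the decomposition reads:

```python
# Many mutually exclusive specific variants, each with the same tiny prior,
# sum to the prior of the disjunction "some god with such-and-such attributes exists".
n_variants = 10 ** 9    # assumed number of distinguishable specific variants
prior_each = 1e-12      # assumed prior for any single specific variant
print(n_variants * prior_each)   # 0.001: small, but not infinitesimal
```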
No. We use the approximation, and the approximation has the same size for both of them (we are comparing the hypotheses “There is a God with such and such attributes” and “There are aliens who make us believe that there is a God with such and such attributes”). The algorithm for constructing this approximation, though, is simpler for the pure God hypothesis (using the mere fact of its existence without formulating the hypothesis itself, just as we establish dualities between different types of string theories using the fact that M-theory exists without formulating it), since it does not require the transitional link of “hidden aliens”.
I’m not understanding this part. If we already assume that aliens and God exist (which is not allowed because it’s begging the question) then of course it’s simpler to assume God explains the evidence than to introduce the additional hypothesis that the aliens are also trying to fool us.
But without committing the fallacy of begging the question, we are left with the conjunctive hypothesis of “aliens exist” and “they are trying to fool us” that dominates “there is an omniscient being” (which must have an infinitesimal prior), never mind all the other attributes of your particular God.
“that dominates “there is an omniscient being” (which must have an infinitesimal prior)”
It must not, because the theory does not completely describe the omniscient being; it merely states its existence. If your theory claims that the Universe is infinite (which can be true—we might live in an open Universe), it does not mean that your theory is infinite.
Once again, how did you distribute the priors? By how easily you can use the theory to make predictions. In both cases, hidden parameters or hidden aliens, you say: ok, let us keep our old assumptions, but introduce a hidden thing Y that works such that our observations can be explained by X. X alone is not good—it requires going from a deterministic to a random Turing machine (QM), or acknowledging that the theory exactly describing our observations can be infinitely large, while we can only approximate it. Y gives some hope of resolving this—of staying within a deterministic Turing machine, or within a finite though large theory of everything. However, in both cases the use of Y is just “Y simulates X”. Well, in my opinion you do not even need Solomonoff’s lightsaber here—the simple Occam’s razor is enough to see that Y is redundant.
Equivocation. The algorithmic (Kolmogorov) complexity cost of the conjunction of “simulated X” and Y is finite, but the “real X” is infinite, therefore, the former must be preferred by Occam’s razor. “Simulated X” is a deception by aliens and is not a full halting oracle, but a finite approximation of one. It can’t do everything the “real X” could.
I do not believe that aliens are performing miracles, just that that explanation is infinitely more probable on priors than an omniscient God. The miracles you have pointed to so far are best explained as natural accidents or hoaxes, not nearly enough evidence to even suggest aliens.
Ok. It looks like we have started to go in circles; sorry for not being clear enough. Let me try to explain once again.
You have a lot of observation data. You have significantly more potential observation data you could gather. I previously considered all the potential observation data to be finite—however, I realized that this is not necessarily so: for example, a scientific breakthrough, aliens, or God could turn us into immortal creatures with an ever-increasing ability to gather, remember, and process information.
So, you want to find a theory, based on the already observed data, that would predict the data that is not yet observed. I bet we both believe that this is possible, but with some limitations.
1. Does a finite theory exactly predicting all the data exist (in the sense of the Turing machine)? Since all the data is infinite, the a priori probability for such a theory would be zero—without any other assumptions. You can introduce a strong assumption of predictivity, basically stating that such a theory exists. However, I think that this assumption is too strong (based on the a posteriori results of quantum mechanics, where you can predict only the probability of an observation but not the definite outcome—so your theory can recover only part of the observed data). Instead I would suggest a weak predictivity assumption:
2. The theory exactly predicting all the data is infinite (such infinite theories exist—for example, “witch did it”, where “it” is “all the data to be observed”); however, its finite approximations can predict some part of the data with some precision.
You can try to make it stricter, saying: “Among all the finite approximations there is one with maximal predictive power”, but I do not see any argument for this. The prior expectation is that you can increase the precision by increasing the length of the theory.
Now, we would like to classify the finite approximations based on their precision and length. First, does a mere reference to the existence of the exact infinite theory make the theory under consideration infinite too? No—otherwise we would have to acknowledge that Tegmark’s theory of the mathematical multiverse (all the mathematically consistent worlds) is infinite. It refers to the existence of all the possible worlds without describing each of them. In the same way, a theory stating that God knows everything does not state what exactly He knows. Thus, our approximation of the infinite theory of an omniscient God is just “God exists with such and such attributes”, and it is finite. The approximation “aliens fake a God with such and such attributes” is also finite, but longer. It may seem better because “aliens faking God” can potentially be an approximation of a finite exact theory predicting everything—however, as we discussed before, there is no reason to assume that such a finite theory exists, and hence no reason to think that “aliens fake God” dominates “God exists” just because the first is an approximation of a finite theory and the second of an infinite one. We compare the lengths of the approximations, not of the full theories, and the approximation “God exists” is shorter and thus should be preferred.
″ I do not believe that aliens are performing miracles, just that that explanation is infinitely more probable on priors than an omniscient God. The miracles you have pointed to so far are best explained as natural accidents or hoaxes, not nearly enough evidence to even suggest aliens. ”
Let us first fix the priors and then move to discussing miracles, ok?
Yes, I just started to notice that after re-reading this thread. It seems like we’re talking past each other without understanding. For Double Crux to work, we’re not supposed to aim for direct persuasion until after we’ve identified the double crux, or we’ll get “lost in the weeds” discussing the parts that aren’t important to us. Have we found it yet? I think we have not, and that’s what went wrong here.
I have yet to identify a single crux, but part of that might be because I don’t understand your concept of God. I don’t know what crux could possibly convince me your God exists, because I still don’t know what “God” means (to you).
I’m honestly not that familiar with the Eastern Orthodox tradition. Protestant sects are more common in my country. The God concept worshiped by the average churchgoer here seems laughably naive, and logically incoherent, but it does have some differences from what you’ve described so far. And the apologists, even in my country, seem to have a different definition than the average churchgoer (in my country), probably because the naive definition is so indefensible. It’s motte-and-bailey rhetoric—a combination of bait-and-switch with equivocation.
So I’ll ask again: Is omniscience a crux for you? That is, if a source you would consider authoritative (the bishops, the Patriarch, archeology, visions from God, whatever it takes) explained to you that omniscience was not an attribute of God as He revealed Himself, but a later misrepresentation made by sinful philosophers, would you then say your God does not exist?
If you answer, “Then my God still exists and is not quite omniscient as I had once believed,” then omniscience is not a necessary attribute for your God definition, and there is no need to discuss it further, because it is not a crux.
But, if you answer, “A ‘God’ that is not omniscient is no God of mine,” then omniscience is a crux for you and we need to nail down what that means, because it might be closely related to a crux of mine.
For Double Crux to work, we’re not supposed to aim for direct persuasion until after we’ve identified the double crux, or we’ll get “lost in the weeds” discussing the parts that aren’t important to us.
I’m not sure this is part of the authoritative definition of doublecrux, but FYI the way I personally think of it is “Debate is when you try to persuade the other person [or third parties] that you’re right and they’re wrong. Doublecrux is when you try to persuade _yourself_ that they’re right and you’re wrong, and your collective role as a team is to help each other with that.” (I don’t think this is quite right, obviously the goal is for both of you to move towards the truth together, whatever that may be, but I think the distinction I just made can sometimes be helpful for shaking yourself out of debate mode)
I’m not sure this is part of the authoritative definition of doublecrux,
I’m not sure if anyone has an authoritative definition of doublecrux yet. But as this is my first real attempt at it, I appreciate guidance. We did open with the Litany of Tarski, but I might have lost sight of that for a moment. I maintain that I at least need to understand what my interlocutor is saying before I can conclude that he is right.
you try to persuade yourself that they’re right and you’re wrong … (I don’t think this is quite right, obviously the goal is for both of you to move towards the truth together …
Again, the Litany of Tarski: If a God exists, I desire to believe that is the case. An update for either side is a victory. But the goal is not to fool myself or give up, or give in to confusion. The update must be an honest one, or the whole exercise is empty.
Wouldn’t the prior probability that God exists be a crux for you? I.e., if you changed your prior probability from infinitesimal to something not negligibly small, would it change your position? At least the infinitesimal probability is a crux for me.
Let me also note that our positions do not completely cover the whole spectrum of possible answers (it is not exactly “A” or “not A”). I.e., as far as I understand, you think the world is completely controlled by the laws of nature; I think there is a God as Eastern Orthodoxy describes Him. In between there are many other options:
-simulation
-aliens
-Higher Power (includes my belief as a particular case)
-a world that is not fully describable by math, only approximately
-and whatever else that just does not come to my mind
Wouldn’t the prior probability that God exists be a crux for you? I.e., if you changed your prior probability from infinitesimal to something not negligibly small, would it change your position? At least the infinitesimal probability is a crux for me.
Getting past an infinitesimal prior to a tiny finite one is a long way from “more likely than not”.
But more simply, my prior is my position. If you get my prior belief for the proposition “God exists” over 50%, then you’ve won: at that point I’ve become a theist by definition (though maybe not a very confident one). This isn’t a crux—It’s the original proposition!
Errr, not completely—you have a prior and you have experience. For example, suppose you agree after a long discussion that the probability that God exists is not infinitesimal but 0.01%. Ok, you are still more an atheist than a theist. Then if you observe miracles you update to a much higher probability—but you can’t do that if your prior is infinitesimal, as it is now.
What priors would you put now on the following:
-the Universe is completely describable by a finite set of laws, no other reality behind
-the Universe is approximately describable by a finite set of laws, and the approximation improves with the length of the theory (an infinite theory is needed for a full description)
the Universe is completely describable by a finite set of laws, no other reality behind
I don’t consider this question well-posed. Physics seems to be working pretty well. But what do you mean by “Universe”? The part we can observe? Surely there’s more to it than that. And “laws” can be dependent on context. The law that objects accelerate downward at 9.8m/s/s doesn’t apply on Mars, but there’s a similar law with a lower number and an underlying law of gravity connecting both cases. Laws that seem to be “fundamental” now are probably dependent on local conditions. The “symmetry breaking” observed in particle physics indicates this. And very simple rules like Conway’s Life can produce very complex behavior, with emergent “laws”, like “gliders travel diagonally”. Is this law from a reality behind or in front of Life?
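As a concrete illustration of that last point, here is a minimal sketch of Conway's Life (standard rules) showing the emergent "law" that a glider reappears one cell diagonally displaced every four steps:

```python
from collections import Counter

def step(cells):
    """One generation of Conway's Life; cells is a set of live (x, y) coordinates."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in neighbor_counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)

# The emergent "law": after 4 steps the glider is the same shape, shifted by (+1, +1).
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # True
```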
Ok, let us put it more strictly. What is your prior that there exists a finite theory that can predict all our potential future observations exactly? And what is your prior that such a theory does not exist and we can only use approximations?
N.B. By all observations I mean ALL observations, including the results of measurements in QM (not just their probabilities; we observe the results too, right?)
I still don’t understand. Are you asking if the universe is deterministic?
Which sense of “exist” do you mean? Mathematically, where we can “have” imaginary things like infinite uncomputable sets, or physically, where we obviously can’t construct an object corresponding to such a thing?
Solomonoff induction cannot be run on real physics. It’s an abstracted ideal that can only be approximated. Maybe quantum field theory predicts the motion of particles to an accuracy of eleven digits, but that doesn’t mean you can use it to predict the weather. You don’t have enough computing power, and you don’t know the initial conditions to that precision anyway.
Even AIXI, an ideal agent using Solomonoff induction (which can’t be physically built), can only make probabilistic predictions based on observations made so far. There’s always an infinite class of universes (hypotheses) that have produced the observations thus far, and they always disagree on the next bit.
There’s no need to invoke quantum physics here. Given what we already know of relativistic physics, it’s always possible that a particle could approach at the speed of light and mess up your plans. Because it’s moving at light speed, there’s no way in principle you could have observed it to take it into account in advance. Even AIXI can be “surprised” by low-probability events like this, even in a deterministic universe (because it has only observed a small part of the universe so far), and it has infinite computing power!
Well, of course I do not suggest predicting the weather from the laws of QFT; I mean mathematically. Let us consider all possible future observations as the data. Do you think it can be exactly generated by a theory of finite length (as the output of a universal Turing machine with the theory as its input), or would you require a theory of infinite length to reproduce it exactly?
The observable universe probably has a finite number of possible states.
The laws of physics appear to be deterministic and Turing computable.
Therefore, an infinite theory would never be required. (And this makes me sympathetic to the ultrafinitists.) The laws of physics can be mapped to a Turing machine, and the initial conditions to a (large, but) finite integer. There is nothing else.
But I’m not sure that “all possible future observations” means what you think it means.
In the MWI, any observer is going to have multiple future Everett branches. That’s the indexical uncertainty. Before the timeline splits, there is simply no fact of the matter as to which “one” future you are going to experience: all of them will happen, but the branches won’t be aware of each other afterwards.
And MWI isn’t even required for indexical uncertainty to apply. A Tegmark level I multiverse is sufficient: if the universe is sufficiently large, whatever pattern in matter constitutes “you” will have multiple identical instances. There is no fact of the matter as to which “one” you are. The patterns are identical, so you are all of them. When you make a choice, you choose for all of them, because they are identical, they have no ability to be different. Atoms are waves in quantum fields and don’t have any kind of individual identity. You are your pattern, not your atoms. But, when they encounter external environmental differences, their timelines will diverge.
if the universe is sufficiently large, whatever pattern in matter constitutes “you” will have multiple identical instances. There is no fact of the matter as to which “one” you are. The patterns are identical, so you are all of them. When you make a choice, you choose for all of them, because they are identical, they have no ability to be different.
Copies of you that arise purely from the size of the universe will have the same counterfactual or functional behaviour, that is, they will do the same thing under the same circumstances... but they will not, in general, do the same thing, because they are not in the same circumstances. (There is also the issue that being in different circumstances and making different decisions will feed back into your personality and alter it.)
The observable universe probably has a finite number of possible states.
Not so sure about that. For this you need at least
1. The Universe to be finite (i.e., you cannot have an open Universe, only something like the surface of a 4d sphere). It is possible; the measured curvature of the Universe is approximately on the boundary, but an open Universe is also possible.
2. The Universe to be discrete on the microscale. Again, according to some theories it is the case; according to others, it is not.
So, I would say: “maybe yes, it is finite, but the prior probability is far from being 1”.
Side note: a Universe with a finite number of states is quite a depressing picture, since it means that everything will inevitably end up in the highest-entropy state: the inevitable end of humanity. Of course, it contradicts nothing, but in this model any discussion of existential threats to humanity (like superintelligence, quite popular here) makes no sense, since the end is unavoidable.
″ And MWI isn’t even required for indexical uncertainty to apply. A Tegmark level I multiverse is sufficient: if the universe is sufficiently large, whatever pattern in matter constitutes “you” will have multiple identical instances. There is no fact of the matter as to which “one” you are. The patterns are identical, so you are all of them. When you make a choice, you choose for all of them, because they are identical, they have no ability to be different. Atoms are waves in quantum fields and don’t have any kind of individual identity. You are your pattern, not your atoms. But, when they encounter external environmental differences, their timelines will diverge. ”
Could you please explain this in more detail? I am confused. If I measure the spin of an electron that is in a superposition of spin up and spin down, I obtain spin up with probability p and spin down with probability 1-p. How do I predict exactly, using the Tegmark multiverse, when I will see spin up and when spin down?
Could you please explain this in more detail? I am confused. If I measure the spin of an electron that is in a superposition of spin up and spin down, I obtain spin up with probability p and spin down with probability 1-p. How do I predict exactly, using the Tegmark multiverse, when I will see spin up and when spin down?
I’m not saying that a Tegmark I multiverse is equivalent to MWI, that’s actually Tegmark III. I’m saying that Tegmark I is sufficient to have indexical uncertainty, which looks like branching timelines, even if MWI is not true. See Nick Bostrom’s Anthropic Bias for more on this topic.
Only if you’re interested. I haven’t actually read the whole book myself, but I have read LessWrong discussions based on it. I think the Sleeping Beauty problem illustrates the important parts we were talking about.
Ah, I think I got the point, thank you. However, it does not resolve all the questions.
1. You can’t deduce Born’s rule—only postulate it.
2. Most importantly, it does not give you a prediction of what YOU will observe (unlike hidden parameters—they at least could do that). Yes, you know that some copies will see X and some will see Y, but it is not an ideal predictor, because you can’t say beforehand what you will see, i.e., in which copy you will end up. So all your future observed data cannot be predicted; only the probability distribution can be. (A toy illustration of this distinction follows.)
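A toy numerical version of that distinction, with made-up amplitudes (just the standard Born-rule arithmetic, no claim about interpretation):

```python
import random

alpha, beta = 0.6, 0.8      # assumed amplitudes for |up> and |down>; |alpha|^2 + |beta|^2 = 1
p_up, p_down = abs(alpha) ** 2, abs(beta) ** 2
print(p_up, p_down)         # ~0.36, ~0.64: the distribution, which the theory does predict

# Which single record "you" end up with is not predicted, only sampled from that distribution.
print(random.choices(["up", "down"], weights=[p_up, p_down])[0])
```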
Can’t you? Carroll calls it “self-locating uncertainty”, which is a synonym for the “indexical uncertainty” we’ve been talking about. I’ll admit I don’t know enough quantum physics to follow all the math in that paper.
Most importantly, it does not give you a prediction of what YOU will observe (unlike hidden parameters—they at least could do that). Yes, you know that some copies will see X and some will see Y, but it is not an ideal predictor, because you can’t say beforehand what you will see, i.e., in which copy you will end up.
Yeah, in this scenario, the “YOU” doesn’t exist. Before the split, there’s one “you”, after, two. But even after the split happens, you don’t know which branch you’re in until after you see the measurement. Even an ideal reasoner that has computed the whole wavefunction can’t know which branch he’s on without some information indicating which.
So all your future observed data cannot be predicted; only the probability distribution can be.
More or less. You can compute all the branches in advance, but don’t necessarily know where you are after you get there. The past timeline is linear, and the future one branches.
″ Can’t you? Carroll calls it “self-locating uncertainty”, which is a synonym for the “indexical uncertainty” we’ve been talking about. I’ll admit I don’t know enough quantum physics to follow all the math in that paper. ”
That was super cool, thank you a lot for this link!
Side note: a Universe with a finite number of states is quite a depressing picture, since it means that everything will inevitably end up in the highest-entropy state: the inevitable end of humanity.
Yes, according to our best current understanding of cosmology, the universe itself will eventually die (i.e. become unable to sustain life).
Of course, it contradicts nothing, but in this model any discussion of existential threats to humanity (like superintelligence, quite popular here) makes no sense, since the end is unavoidable.
Again the laws of physics are what they are and don’t care what I want.
But in the most likely scenarios, this will take a very long time. The Stelliferous Era (when the stars shine) is predicted to last 100 trillion years, and we’re not even 14 billion years into it. Civilization may continue to extract energy from black holes for a time many orders of magnitude longer than that.
It’s not completely hopeless. Maybe in that time we’ll figure out how to make basement universes and transfer civilization into a new one, as Nick Bostrom et al have argued may be possible.
But even if we ultimately can’t, shouldn’t we try? Shouldn’t we do the best we can? Wouldn’t you rather live for over 100 trillion years than die at 120 at best?
″ It’s not completely hopeless. Maybe in that time we’ll figure out how to make basement universes and transfer civilization into a new one, as Nick Bostrom et al have argued may be possible. ”
Yeah, you see, then all the possible future observation data becomes infinite.
″ But even if we ultimately can’t, shouldn’t we try? Shouldn’t we do the best we can? Wouldn’t you rather live for over 100 trillion years than die at 120 at best? ”
Of course we should try—because there is a chance that we can. Not because we would live 10^14 years and then all die. We should count on surviving forever, or it will be a pretty miserable 10^14 years without any hope.
The observable universe probably has a finite number of possible states.
Not so sure about that.
Not sure either, which is why I said “probably”.
For this you need at least
The Universe to be finite (i.e., you cannot have an open Universe, only something like the surface of a 4d sphere). It is possible; the measured curvature of the Universe is approximately on the boundary, but an open Universe is also possible.
Note that I said “observable universe”, not “multiverse” or “cosmos”. There are regions of the universe that are not accessible because they are too far away, the universe is expanding, and the speed of light is finite. This limit is called the Cosmic event horizon
The Universe to be discrete on the microscale. Again, according to some theories it is the case; according to others, it is not.
I think it is sufficient to say that the information content of the observable universe is finitely bounded. Space doesn’t necessarily have to be made of pixels like some cellular automaton for this to hold. The Bekenstein bound is proven from Quantum Field Theory. How true QFT is, is another question, but the experimental evidence shows it is extremely accurate.
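For a sense of scale, here is a rough back-of-the-envelope version of that bound for the observable universe, using round textbook numbers (the radius, density, and constants below are approximations, so treat the result purely as an order of magnitude):

```python
import math

R = 4.4e26        # radius of the observable universe, meters (~46 billion light-years)
rho = 8.5e-27     # critical mass density, kg/m^3
c = 3.0e8         # speed of light, m/s
hbar = 1.05e-34   # reduced Planck constant, J*s

volume = (4.0 / 3.0) * math.pi * R ** 3
energy = rho * volume * c ** 2            # total mass-energy, E = m * c^2

# Bekenstein bound on information content: I <= 2*pi*R*E / (hbar * c * ln 2) bits
bits = 2.0 * math.pi * R * energy / (hbar * c * math.log(2))
print(f"~10^{int(math.log10(bits))} bits")   # on the order of 10^124 bits
```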
″ Note that I said “observable universe”, not “multiverse” or “cosmos”. There are regions of the universe that are not accessible because they are too far away, the universe is expanding, and the speed of light is finite. This limit is called the Cosmic event horizon ”
On the one hand, you are totally correct about it—assuming the cosmological constant (the lambda term) stays what it is. There are nuances, however:
-If we are forever in de Sitter space (lambda-dominated, as now), the universe is explicitly not time-invariant (simply because it is expanding). There is a non-zero particle production rate, for example (an analog of Hawking radiation). It means that we could potentially construct a “perpetuum mobile of the first kind”, which means that we can get to any energy—infinite space for observations. Unless this starts to have a screening effect on the lambda term.
-If lambda decreases (or is screened), the expansion may go back from lambda-dominated to matter-dominated, leading to its slowing down. In this case we can start observing areas of the universe that used to be beyond the horizon.
Anyway, there is a lot of speculation about what can be and what cannot. Can we maybe agree that both prior probabilities (that all our possible future observations are finite, and that they are infinite) are not negligible? What about 1/2 for each, to start with?
Anyway, there is a lot of speculation about what can be and what cannot.
I worry we may be getting lost in the weeds again. We need to try and find cruxes. Is this related to a crux of yours? What exactly are you getting at?
prior probabilities (that all our possible future observations are finite, and that they are infinite) are not negligible?
Even if time could be extended infinitely without the universe dying, there is no time at which the infinity has been completed. It’s always finite so far.
An “immortal” being with finite memory in infinite time will eventually forget enough things to repeat itself in a loop, living the same life over and over again.
Can this be avoided? There are limits to any physical realization of memory. If you try to pack too many bits in a given volume of space, it will collapse into a black hole. And then adding anything more will make the event horizon bigger. Infinite memory requires infinite space and energy. Maybe with basement universes it could be done. They might have to communicate through wormholes or something. This is all very speculative, so I don’t know.
Well, we can also make infinite memory (as you suggested). But, ok, what would you put as the prior probability that the theoretically possible observation data is infinite? It looks like you are not strongly against it, so what about something between 0.5 and 0.1? (Of course we can’t strictly prove it right now.) If you say “yes, this works”, we can move on. If you claim that this probability is also super-tiny, like 10^(-1000), I will continue to argue (well, yes, if we cannot observe infinite data in all the infinite future, it does not make sense to talk about an omniscient God).
To show you what I am leading to:
-If the total possible observation data is infinite, what is the prior probability that it is exactly reproduced by a finite hypothesis? I argue that it is infinitesimal.
-What is the probability that such an infinite hypothesis exists? I argue that it is 1: for example, “the witch did (copy-paste of all the data)”. The predictive force of this hypothesis is zero.
-We need predictivity, so we assume that there are finite approximations that can partially reproduce the data. Such an assumption is weaker than the assumption of a finite exact hypothesis, so it should be preferred.
-Therefore, we should use Solomonoff’s lightsaber not on full theories, but on approximations.
-Consider two classes of approximations. The first gives exact predictions where it can and predicts nothing where it cannot. The second is weaker: it sometimes gives wrong predictions. Since the second is weaker, the priors for it are significantly higher. So, I would say, if the observable data is infinite, most of our approximate theories will from time to time give wrong predictions.
-This does not say, of course, how often these wrong predictions occur. If they are too frequent, such an approximation is useless.
-Basically, since predictions are laws of nature, wrong predictions are miracles. We should expect them to exist but to be rare.
-Talking about aliens: the infinite hypothesis “God with such attributes exists” can be used only as an approximation (that is, basically, our understanding of it). The finite hypothesis “aliens fake us into believing that God with such attributes exists” can also be used only as an approximation (that is, our understanding of God plus the assumption that it is faked by aliens). Thus such an approximation is longer and should be given smaller probability.
well, yes, if we cannot observe infinite data over an infinite future, it does not make sense to talk about an omniscient God
You are not a future hyper-mind made of basement universes and wormholes. You’re a mortal human like me, with a lifespan measured in mere decades so far. Yet you claim to have knowledge of an infinite God. How did you come to this conclusion? By what method can you make such an assertion? Is this special pleading for a special case or do you use this method for anything else? Why should I consider that method sound and reliable?
My best guess: you were indoctrinated in childhood by your parents and community, long before you were old enough to develop critical thinking skills of your own. For obvious survival reasons, children are very inclined to learn from their parents and elders. The memeplex of any of the old religions must be self-sustaining, or they wouldn’t still be here. They include psychological tricks to produce fake evidence, to stop questions, to make empty threats. They include answers to your questions or at least pretend to. It became part of your identity. You later learned of the methods of science, but they didn’t become a part of you the same way. You compartmentalized the lessons and didn’t use them to update your old thinking. You sought out evidence to support your belief instead of trying to disprove it to see if it would hold up, like a scientist.
Most people seem to use this method. You are not alone. And that’s exactly the problem with it. People are using the same methods to believe in other religions that you already know to be false. How can that method be reliable if it so reliably produces the wrong answers? What makes you any different from them? Accident of birth. That’s it. Your methods are the same.
Maybe that’s a crux for me. If it could be shown that a God belief was founded on a sound epistemology that reliably produced good results, instead of these obvious fallacies, I would have a much harder time dismissing the proposition as a fraud.
” You sought out evidence to support your belief instead of trying to disprove it to see if it would hold up, like a scientist. ”
1. If I were doing this, I would never have come to this website to discuss it with you. Assume good intentions.
2. As you said, for an infinitesimal prior probability no evidence is enough. That is what I am arguing about here. If I am persuaded that the probability is indeed infinitesimal, all my evidence amounts to nothing. I could see the resurrection of the dead and it still would not be enough.
3. I can blame the same thing on you. I am not going to guess, but there are so many stories of atheists who became atheists just because God didn’t do what they asked: “I do not want to deal with a God that does not do what I want, therefore there is no God.”
Ok, let us go back to our business if you don’t mind.
″ If it could be shown that a God belief was founded on a sound epistemology that reliably produced good results, instead of these obvious fallacies, I would have a much harder time dismissing the proposition as a fraud. ”
First, could you review the previous comment to see if you agree with the logic, and if not, what you disagree with in particular.
Second, if you agree with this logic, you should acknowledge that there is a non-negligible prior probability that miracles exist in principle. You can claim that they are rare, and each time you do not observe a miracle you can say they are even rarer.
Third, if you acknowledge that miracles can happen, it is worth looking at the particular cases when someone claims they have happened. In a large organised religion (the Catholic, Anglican, or Russian churches, for example) there is very often a special committee (usually including scientists) that checks whether something people claim to be a miracle is indeed a miracle. Very often they find it to be a hoax or a natural effect, but sometimes they acknowledge that it is indeed a miracle. Other religions may also have miracles, as well as things outside religion, but there may be no developed institution of miracle verification.
If I were doing this, I would never have come to this website to discuss it with you.
A fair point. But I still think you are compartmentalizing.
for an infinitesimal prior probability no evidence is enough.
It’s never enough for induction, performed correctly. But an a priori deductive argument maybe could work. I’ve heard theists attempt these arguments, but have not found them convincing.
I can blame the same thing on you.
I am trying to find cruxes, not blame. I would rather leave our identities out of it and examine the question as objectively and impartially as possible. But your epistemology is extremely relevant in this case. It’s the rights of Mortimer Q. Snodgrass again. I don’t think the God hypothesis has enough going for it to even justify raising it to our attention. If we had started with a good scientific epistemology, this would not even be a question. Instead we started with a biased indoctrination, and have to dig ourselves out of it.
I am not going to guess, but there are so many stories of atheists who became atheists just because God didn’t do what they asked: “I do not want to deal with a God that does not do what I want, therefore there is no God.”
It’s the availability heuristic again. Who have you heard these stories from? It’s probably not the atheists themselves! You can’t trust the clergy to be honest about this topic. They believe atheism is damnation, and so must present it as a sin. But for those raised atheist with a scientific worldview, believing in God seems as silly as believing in Santa Claus or the Tooth Fairy.
In my case, I was raised as a believer. My perspective changed due to an accumulation of a number of factors. The Problem of Evil was apparent to me in childhood. It introduced a doubt that I could not resolve. The biblical creation story also didn’t align with what I read of science as a child.
When I expressed my misgivings, my church told me that God was a God of Truth, and the teachings of the Church could not possibly contradict the Truth, once it was properly understood. So I withheld judgement until I could learn more. I held both the religious and the scientific worldview in my mind at once, in the hope that they could eventually be unified. I was compartmentalizing, but I was conscious that I was doing so. I could speculate and philosophize in either religious or scientific modes, and I knew which was which. I saw the fruits of science. Computers and rocket ships and vaccines. I had church-related experiences I could only describe as spiritual. Surely they both had to be true?
I studied my faith in depth. I was warned of the sin of pride. I was uncertain how to interpret that, but after study, concluded that the problem with pride was an unwillingness to learn from error. I resolved to always be honest with myself. God was a God of Truth, after all, so honesty could not be wrong. I learned to think more critically. I found many satisfying answers, but my doubts on these points, and more, only deepened. There was evidence against the faith, that was for certain. Doubts remained, but abandoning my faith would mean damnation and I could never convince myself it was false beyond a reasonable doubt.
Then I learned that civil cases were judged according to the preponderance of the evidence, rather than beyond a reasonable doubt. In my commitment to honesty, I judged my faith again by this standard. Suddenly, many of the faith-promoting stories I had considered “evidence” no longer appeared that way. They were indistinguishable from no God at all. Once seen, I could not unsee it. Why was God pretending so hard not to exist? So we are less culpable for sins? Then why have a church at all? My faith was shaken (and not for the first time), but still I believed. I resolved to study more, to try and rebuild what I had lost.
In my church, we brethren sometimes minister to the other members, usually in pairs. I was usually too shy to participate, but I had studied enough to know answers from the scriptures. When ministering to one poor sister who was struggling, I went into religious mode and spouted off the relevant doctrine. This happened to be a point I had doubts about. And then the realization struck me: I didn’t believe a word of it. I sounded that confident, and I didn’t believe a word. I had lied to her. And worse I had lied to myself, the exact thing I had resolved not to do. I had so easily broken my commitment to honesty, just by studying doctrine. And if I could do it, so could any of the other members! They could sound so convicted, and yet not know! The testimony of the others I had been relying on may have been founded on nothing but air.
I still had my spiritual experiences, but they had always resisted critical examination. I finally understood that what I thought was the witness of the Holy Spirit, was only those around me interpreting my emotions for me in a certain way. They were spouting off doctrine memorized by repetition, the same as I had done to that poor sister. In another context, the same emotions could have been a witness for a completely different god. My faith was shattered to its very core.
My church regards all others as apostates. I had rejected them long ago. There was nowhere to turn. For a time, I considered myself agnostic. I told my story to a confidante, and she replied with something like, “so you’re an atheist then”. And in that moment, I realized it was true. I’m an atheist. I can’t believe in God anymore, even if I try.
And after reading the Sequences, and understanding Bayes, I realized that the faith-promoting stories I had thought were evidence, and then eventually no evidence at all, were actually evidence against the church. The church had actually been preemptively preaching some of its worst stories, so we would learn to think of them in the best possible light, before we had a chance to hear a more critical presentation from anyone else.
-Basically, since predictions are laws of nature, wrong predictions are miracles. We should expect them to exist but to be rare.
Due to indexical uncertainty, we can always be surprised by low probability events. I don’t see these as evidence of God though.
-Talking about aliens: the infinite hypothesis “God with such attributes exists” can be used only as an approximation (that is, basically, our understanding of it). The finite hypothesis “aliens fake us into believing that God with such attributes exists” can also be used only as an approximation (that is, our understanding of God plus the assumption that it is faked by aliens). Thus such an approximation is longer and should be given smaller probability.
Around in circles again, but is there a difference this time? Do we agree “fake alien God hypothesis” dominates “infinite God hypothesis”? When using induction? You don’t seem to be disputing it. But is “approximate God” simpler than “fake alien God”? That depends! How good is your approximation of “infinite”? How complex are your aliens?
But if you want to argue for a non-infinite God, that’s OK with me, but even if you convince me, it won’t be the infinite God you have convinced me of, but the finite approximation: Something more powerful than mankind, but not infinitely powerful. Something more knowledgeable than mankind, but not infinitely knowing… this sounds like you’re describing advanced aliens. They’re the same thing. I would then argue that the aliens are the reality and the “infinite God” is the approximation of them made by ignorant humans.
Even I would be willing to call such aliens “gods” given certain conditions, but we’re using your definition of “God”.
Can you convince me of approximately-God aliens? Maybe. My prior is not zero, but like the pet purple dragon from Mars, it would take a lot of evidence to convince me.
First, could you review the previous comment to see if you agree with the logic, and if not, what you disagree with in particular.
It feels like we are going around in circles at this point. I’m not sure where the disconnect is.
-If the total possible observation data is infinite, what is the prior probability that it is exactly reproduced by a finite hypothesis? I argue that it is infinitesimal.
The set of all natural numbers is infinite, yet can be enumerated by a finite computer program (when run on an infinite computer, AKA, a Turing machine). There are many many other examples of infinite patterns enumerable by finite programs. And some of them, like “compute the digits of pi” seem pretty chaotic, yet their Kolmogorov complexity is small.
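For instance, here is a well-known finite Python program (a version of Gibbons’ unbounded spigot algorithm, included purely as an illustration) whose output never ends:

```python
from itertools import islice

def pi_digits():
    """Yield decimal digits of pi one at a time (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            # The next digit is pinned down; emit it and rescale the remaining tail.
            yield n
            q, r, t, k, n, l = 10 * q, 10 * (r - n * t), t, k, 10 * (3 * q + r) // t - 10 * n, l
        else:
            # Consume one more term of the series to narrow down the next digit.
            q, r, t, k, n, l = q * k, (2 * q + r) * l, t * l, k + 1, (q * (7 * k + 2) + r * l) // (t * l), l + 2

print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

The program is a few hundred bits long, yet it enumerates an infinite, rather chaotic-looking string.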
One wrinkle, which you might be alluding to, is that no program with infinite output ever halts. This is true, but there are halting programs that can compute any finite prefix of pi. And like I said before, at no point is your observation infinite. It’s always finite so far. The infinity is never completed.
So the hypothesis “these are the digits of pi” is considered by Solomonoff induction, but maybe it looks like a weighted sum of a class of programs that say “compute pi up to the nth digit” for some n. These still compress quite well, (especially for compressible n’s) so their Kolmogorov complexity is small. I don’t think this is an obstacle for Solomonoff induction.
Has Solomonoff induction got it wrong? Close but not quite? I would argue no. I don’t believe uncomputable sets can physically exist. There are no perfect circles. The abstraction called pi is the approximation, for whatever algorithm physics is actually running, which Solomonoff induction would eventually find.
Errr, not completely: you have a prior and you have experience.
The posterior becomes the next prior when updating again, so we still call it a “prior” even though this is not the same prior as before. Sorry for the confusion. My current prior is my current level of belief/confidence.
Then if you observe miracles you update it to much higher probability
Higher, yes, but (say) ten times almost nothing is still almost nothing. And that’s only if the likelihood ratio for the evidence favors the hypothesis by that much, which it doesn’t.
but you can’t do it if your prior is infinitesimal as now.
That’s right. No finite amount of evidence can overcome an infinitesimal prior.
Your example “miracles” are evidence in favor of miracles existing (because we can hardly expect reports of miracles to be less common if miracles exist) but the likelihood ratio is very close to 1 because false positives (accidents, hallucinations, and hoaxes) are so common. On priors, these explanations are far more likely. That means your “miracle” reports are extremely weak evidence.
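To put rough numbers on it (the figures below are made up purely for illustration), here is the odds form of Bayes’ rule in a few lines of Python:

```python
def posterior_odds(prior_odds, likelihood_ratio):
    # Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio.
    return prior_odds * likelihood_ratio

# Hypothetical numbers, chosen only to illustrate the two points above.
prior = 1e-6          # very low prior odds that miracles are real (made-up value)
strong_report = 10.0  # a report ten times likelier if miracles are real
weak_report = 1.05    # a report barely likelier under "miracle" than under "hoax or mistake"

print(posterior_odds(prior, strong_report))  # 1e-05: ten times almost nothing is still almost nothing
print(posterior_odds(prior, weak_report))    # ~1.05e-06: a likelihood ratio near 1 barely moves the needle
```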
I cannot lower my epistemic standards on this, or I would invite in flat-Earthers, UFO-ologists and various other conspiracy theorists, not to mention all the other religions who have similarly dubious paranormal claims. Why should I favor your paranormal claims over theirs? It’s special pleading.
Strong enough evidence can overcome a very low prior, yes. And this doesn’t have to take very many observations.
But more instances do not necessarily stack like that. That can only happen to the degree they are independent sources. For example, suppose you write a dubious claim in a book, then you make nine more copies of the book. Does that make the claim ten times more likely to be true? What if it’s a hundred thousand copies? Did that help?
Of course it doesn’t! You’re re-counting the same evidence. The contribution of the nine books is completely screened off by the first; the new books have no new information.
I think the cases of miracle reports like weeping icons are similarly not independent enough. A thousand weeping icons is barely more evidence than one. It just means that the hoaxers copied each other’s scam.
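In symbols: if the second report \(E_2\) is just a copy of the first (so that \(P(E_2 \mid E_1, H) = P(E_2 \mid E_1, \neg H) = 1\)), then
\[
P(H \mid E_1, E_2) = P(H \mid E_1)\cdot\frac{P(E_2 \mid E_1, H)}{P(E_2 \mid E_1)} = P(H \mid E_1),
\]
so the copy adds nothing; only the independent part of each new report counts.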
Furthermore, we already know that some similar instances of miracles were hoaxes. Shouldn’t every new hoax report lower my prior that miracles are real?
OK. What does “omniscience” mean? The root words translate to something like “all knowing”. But what is “all”, and what is “knowing”? What’s the minimum qualification? Each successive option seems harder to prove:
Option A: (sufficiently advanced aliens) God’s knowledge isn’t infinite or anything, just far beyond our current level. “Omniscience” is more metaphorical than literal.
Option B: (semi-omniscient simulator) God can look up any past event in the world simulation, but isn’t simultaneously conscious of all of them and cannot predict the future short of actually simulating it. He does not know all the logical implications of His knowledge and can be surprised by events. (Janet from The Good Place might be at this level.) Although perhaps he can rewind the simulation and try a different timeline, if He makes any changes, He can’t always predict what would happen without actually trying it. He may also be ignorant of events in His native plane, outside of the world simulation.
Option C: (halting oracle of the first degree) God is a halting oracle machine able to solve the halting problem for any Turing machine, but is unable to solve the halting problem for halting oracle machines like Himself.
Option D: (higher-order halting oracle) God is a halting oracle machine able to solve the halting problem for any Turing machine, and halting oracle machines of some finite degree less than His own, but is unable to solve the halting problem for higher-order halting oracle machines like Himself, or those of any higher degree. There may possibly be beings of greater degree that know things God doesn’t.
Options A, and maybe B seem at least possible, but very very far from proven. Option C seems unprovable using any finite amount of evidence, but probably has a logically coherent definition. Option D seems unprovable even with infinite evidence, but again seems coherent.
Or did you have some other option in mind? I don’t know how to get past Option D without self-referential paradoxes invalidating the whole definition, but perhaps you have some new math for me?
Yes, I have option E: Everything. God just knows everything, all the possible universes: not calculating, just having them in His infinite memory.
As I stated in the previous comment, there is no reason for the exact theory to be finite, while approximations can be finite (would you like me to copy it here, or can you find it?).
That’s your crux? Lesser interpretations than E won’t do?
I am not convinced that E is logically coherent. It’s as meaningless as “married bachelor”.
Suppose that God’s memory is the set of “all facts” O.
The set of all subsets (or powerset) of O, we’ll call p(O).
Then, for any given fact f and each subset of O in p(O), there is a further fact f′ stating whether f is or is not in that subset.
Thus, there must be at least as many facts as there are elements of p(O), which, being the powerset of O, by Cantor’s Theorem must have a strictly greater cardinality than O.
But we assumed that O contains all facts. Contradiction!
And Cantor’s Theorem holds even for infinite sets! Q.E.D.
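Restating the argument above a bit more formally (nothing new added): fix some fact \(f \in O\) and map each subset \(S \in p(O)\) to the fact \(g(S)\) stating whether \(f \in S\) or \(f \notin S\). Distinct subsets yield distinct facts, so \(g\) is injective and
\[
|p(O)| \le |O|,
\]
which contradicts Cantor’s Theorem, \(|O| < |p(O)|\).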
Good evening. Sorry to bring up this old thread. Your discussion was very interesting. Specifically regarding this comment, one thing confuses me. Isn’t “the memory of an omniscient God” in this thought experiment the same as “the set of all existing objects in all existing worlds”? If your reasoning about the set paradox proves that “the memory of an omniscient God” cannot exist, doesn’t that prove that “an infinite universe” cannot exist either? Or is there a difference between the two? (Incidentally, I would like to point out that the universe and even the multiverse can be finite. Then an omniscient monotheistic God would not necessarily have infinite complexity. But for some reason many people forget this.)
Well, your argument should be able to kill the concept of the Tegmark mathematical multiverse then, so you can guess it is not a “silver bullet” :) Two possible answers:
1. You cannot just change the word “mathematical universe” to the word “fact” in my definition E. “Mathematical universe stating that...” makes no sense to me.
2. However, there are different set-theory axiomatizations. Some of them allow universal sets.
OK, that’s a good point. I had not heard of the universal sets that contain themselves, which I thought would lead to contradictions.
should be able to kill the concept of the Tegmark mathematical multiverse
I’m really not persuaded by the MUH, but at least it’s based on reasoned a priori arguments. Do you have similar a priori arguments for God? There’s no way for evidence to ever be enough to establish omniscience by itself.
Yeah, given New Foundations, I’m no longer confident that “omniscience” is a logical contradiction, but neither am I confident that it isn’t. And I still think it would take an infinite amount of evidence to prove inductively, so you would need some kind of a priori argument for it instead (or why believe it at all?). That’s one obstacle down, but still a long way to go.
I tell you, soon after the discovery of the muon, that there is another particle, like the electron
It took a great deal of evidence to nail down both the existence of new particles and their properties to that degree of precision. It’s already strong enough to overcome a low prior, but due to mathematical symmetries in nature, some particles were even predicted in advance of experimental discovery. In other words, they had a high prior given what was known, which is why scientists were willing to go to the great expense of looking for them.
We do not have any strong evidence for God, and assuming omniscience alone gives Him an infinitesimal prior, which means no amount of evidence could ever be enough.
No. That is not fundamental at all. Bell’s Theorem only rules out local hidden variables. The Many-Worlds Interpretation and De Broglie–Bohm interpretation are deterministic.
Yes, MWI still has indexical uncertainty. This is a property of the observer, not the universe, which remains deterministic. But you can still simulate the wavefunction on a Turing machine and use it to make predictions, which was my point. It’s in the space of hypotheses of Solomonoff induction.
I don’t really prefer non-local theory, but the laws of nature are what they are and don’t care what I want.
Of course, the Universe as a whole is deterministic since it obeys the Schrödinger equation. However, the only thing we have access to is observation, and observation is probabilistic. You cannot predict with a deterministic Turing machine what the outcome of an observation will be, only the probabilities for that outcome.
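To state that in a formula (the standard Born rule, added only for concreteness): if the system is in state \(|\psi\rangle\) and the measurement has possible outcomes \(|i\rangle\), a deterministic simulation can output only the probabilities
\[
P(i) = |\langle i \mid \psi \rangle|^2,
\]
not which particular \(i\) will actually be observed.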
Well, the laws of nature are of course what they are. However, you can interpret them in different ways. You can say that there is fundamental probability, the wavefunction, and all this stuff, as most scientists do when they perform calculations. Or you can start introducing hidden non-local variables, which do not improve your predictions but just make the theory more complicated. There was an April 1st paper introducing particles as sentient beings communicating with each other superluminally to deceive experimentalists. It is your choice which representation you prefer, but I thought you wanted the simplest one.
I think you completely missed my point about the toast. I was trying to be humorous by referencing an actual case, but one that I found especially silly.
It’s just pareidolia. It’s the same as seeing animals in clouds. But which animal you see depends on which animals you’re familiar with. Toast patterns are noisy, so are clouds. The human perceptual system is constantly trying to recognize what it knows in what it sees, and seems particularly good at finding faces. And we have a pretty good idea how this works. See DeepDream.
Yes, I can see that the pattern resembles a human face, and a feminine one. But I personally think that the toast looks more like Abby Sciuto from NCIS than most Virgin Mary paintings.
Oh yeah, I heard about this stuff too. No, I do not consider pareidolia a miracle. Basically, I listed above (replying to what would disprove me) what I take to be miracles. In short: things that not just one old lady claims to be a miracle, and not just a few local priests and a bishop, but a special committee from the Church (after an investigation) and, as a result, the whole Church.
This point is very important: The theory must make predictions to be knowledge—if your theory is equally good at explaining anything (like the witch), then you have zero knowledge, because it fails to constrain anticipation.
If you apply that consistently, you get instrumentalism. Most people here aren’t instrumentalists, and do care about theories that don’t constrain experience, such as MWI, MUH, and the simulation hypothesis. If you are going to reject metaphysics, you should reject all of it.
Not exactly. First, I can predict that if I throw a stone it will fall down, and stuff like that. A miracle may happen, but the probability for it to happen out of nowhere is very small (though not zero).
OK, so when things behave as normally expected, that’s just laws of nature, but whenever you’re surprised we can blame it on the witch?
So, first, such a theory can predict something (like the myrrh-streaming icons mentioned above, or healing, or answers to prayers). Second, the predictions do not always coincide with the no-God theory’s predictions
This point is very important: The theory must make predictions to be knowledge—if your theory is equally good at explaining anything (like the witch), then you have zero knowledge, because it fails to constrain anticipation.
the [advance] expectation of the posterior probability, after viewing the evidence, must equal the prior probability. … If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction. If you’re very confident in your theory, and therefore anticipate seeing an outcome that matches your hypothesis, this can only provide a very small increment to your belief (it is already close to 1); but the unexpected failure of your prediction would (and must) deal your confidence a huge blow.
In other words, how strong a piece of evidence should appear to you, depends on your priors; strength is not a property of the evidence alone. If you are claiming that your God hypothesis is not equally good at explaining anything (like the witch), and if you are (rationally) very confident that God exists, then you must have a weak expectation of seeing strong evidence the other way. That’s a crux, right? What would be a big surprise to your theory?
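The identity behind the quoted passage is just the law of total probability:
\[
P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E),
\]
i.e. the probability-weighted average of the possible posteriors equals the prior, so an expected confirmation can only nudge you upward a little, while the unexpected disconfirmation must move you down a lot.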
This point is very important: The theory must make predictions to be knowledge—if your theory is equally good at explaining anything (like the witch), then you have zero knowledge, because it fails to constrain anticipation
“Laws of nature do not hold 100%” is a prediction. That’s why atheists feel it necessary to argue against miracles.
Well, my expectations would decrease if some of the miracles I believe in were proven to be fakes or natural events. The miracles I believe in are not those that people believe locally, but those that the Church recognizes globally: usually they send a special commission to check whether it is indeed a miracle or just a natural event (or a fake). I would say I put a high probability on the miracles approved by such a commission being indeed miracles, and if you demonstrated to me that they are not, it would decrease my probability. The miracles I can name:
-Different myrrh-streaming icons, as long as they passed the check by church officials beyond the local level.
-Witness accounts that are collected for the canonization of saints. Each time a new person is canonized, one of the main criteria is whether there are miracles from prayers to him. So it is quite a large body of testimony. Most of it can be explained by coincidence or natural effects; however, there are more difficult cases, such as very fast recovery from a disease that, by the doctors’ prognosis, should have taken a few orders of magnitude longer (or should not have happened at all).
-Relics of saints. In some cases (quite often, actually), when after a long time the body of a dead person who is considered a saint is exhumed, it is discovered not to have decomposed. It is not a necessary condition: there are many saints for whom this did not happen. However, it is an interesting question whether this effect is more frequent among saints than among ordinary people (taking into account only the cases when relics were exhumed after canonization, to exclude bias). If it is indeed significantly more frequent, what could be the reason? Why would the situation be the opposite on Mount Athos, where non-decomposition of the body is considered a bad thing?
God’s complexity cost is not just relatively big like any intelligent mind (such as the witch) would be, but literally infinite if we say that God is omniscient: If God is a “halting oracle”,
Is that an argument against the Mathematical Universe Hypothesis? Wouldn’t the ultimate ensemble have to include a halting oracle?
Well, the relationship between infinite extent and infinite complexity is tricky. Everyone in the rationalsphere knows that pi has an infinite decimal expansion, and also that the digits can be generated by a finite program. “Every mathematical entity exists, and only mathematical entities exist” is likewise a brief compression of the MUH.
You can’t prove a halting oracle exists inductively. How could you? Solomonoff induction is doing induction perfectly, and a halting oracle is not even in the hypothesis space, because the space contains only computable functions, and the halting problem is not decidable. And even if it were, what use would that hypothesis be to you? You can’t get any predictions from running a program on a halting oracle machine when you don’t even have one.
Tegmark claims that the hypothesis has no free parameters and is not observationally ruled out. Thus, he reasons, it is preferred over other theories-of-everything by Occam’s Razor. Tegmark also considers augmenting the MUH with a second assumption, the computable universe hypothesis (CUH), which says that the mathematical structure that is our external physical reality is defined by computable functions.
So this is a point that Tegmark himself considers fair. The CUH would not have a halting oracle.
I find the MUH philosophically dubious. I also disagree with Wikipedia’s characterization of the CUH as adding an additional hypothesis on top of MUH (I’m not sure if that’s how Tegmark sees it, or if that was just an interpolation by the editor). Instead, the CUH is throwing out the dubious axiom that allows things like uncomputable sets to exist, which means by Occam’s razor, I think the CUH is the simpler hypothesis. I don’t exactly buy the CUH either, but I don’t have a better idea.
How is that relevant? It is perfectly possible for a mathematical universe to be a form of Platonic realism.
I disagree with your interpretation of “perfectly possible”, but even if I hypothetically grant you that a halting oracle exists, how can an agent ever be rationally justified in believing that it does? It’s something that takes an infinite amount of evidence to prove. The method clearly can’t be induction.
I think you are missing some things that are quite basic: essentially no one believes in things like the Mathematical Universe on the basis of empiricism or induction. Instead, Occam’s razor is the major factor.
Note that by “things like MUH” I include MWI. It is straightforwardly impossible to prove MWI or any other interpretation on the basis of evidence, because they make the same predictions. So the argument given for MWI is in terms of simplicity and consilience.
Not many people here reject all reasoning of that type. Many reject it selectively.
The simplicity criterion means MUH is preferable to CUH, since CUH has an additional constraint.
It would we helpful if there was some algorithm or formula that connects complexity with prior probability. Otherwise, I can say that probability decays logarithmically with complexity, and you will say that it decays exponentially, and we will get totally different prior probabilities and totally different results. Do you know if such thing exists?
The simplest explanation for anything is “The lady down the street is a witch; she did it.” Right?
No? How is that explanation any worse than “God did it”? We can at least see that the lady down the street exists.
The magic algorithm is Solomonoff’s lightsaber. It’s not realistically computable, but it does give us a much better sense of what I mean by complexity, and how that should affect priors.
OK, so I have studied Solomonoff’s lightsaber. I used this blog post: https://www.lesswrong.com/posts/Kyc5dFDzBg4WccrbK/an-intuitive-explanation-of-solomonoff-induction
Please correct me if I am wrong, but I feel that there is a … well, not mistake… assumption that is not necessarily true. What I mean is the following. Let us consider the space of all possible inputs and the space of all possible outputs for the Turing machine (yeah, both are infinitely dimensional, who cares). The data (our Universe) is in the space of outputs, theory to test in the space of inputs. Now, before any assumptions about data and theory, what is the probability for the arbitrarily chosen input of length n lead to output with length N (since the output is all the observed data from our Universe, N is pretty large) - this is what is prior probability, correct?
Now we remember the simple fact about data compression: the universal algorithm of compression does not exist, otherwise you would have a bijection between the space of all possible sequences with length N and length N1 < N, which is impossible. Therefore, the majority of the outputs with length N can not be produced by the input with length n (basically, only 2^n out of 2^N has any chance to be produced in such way). For the vast majority of these outputs the shortest input producing them will be just the algorithm that copies large part of itself to output—i.e., a priory hypothesis is incredibly long.
The fact that we are looking always for something simpler is an assumption of simplicity. Our Universe apparently happened to be governed by the set of simple laws so it works. However, this is the assumption, or axiom. It is not corollary from some math—from math prior should be awfully complex hypothesis.
If you put this assumption as initial axiom, it is quite logical to set incredibly low priors for God. However, starting from the pure math, the prior for this axiom itself is infinitesimal. The prior for God’s hypothesis is also infinitesimal, no doubts. Well, for my God’s hypothesis, since it is then lead to your axiom (limited by the Universe) as a consequence. For “witch from neighborhood did it” and then copy paste all the Universe data to “it” priors actually should be higher for reason discussed above.
Why don’t we then keep the “witch” hypothesis? Well, because its predictivity strength is zero. So basically we keep simplicity hypothesis in spite of its incredibly low priors because of its predictivity strength. And if we want to compare it with different supernatural hypothesis we should compare the predictivity strength. You can not cast them out just because of priors. They are not lower.
No? Perhaps you were trying to do something else, but the above is not a description of Solomonoff induction.
Where exactly is the faulty assumption here?
In Solomonoff induction, the observations of the universe (the evidence) are the inputs. We also enumerate all possible algorithms (the hypotheses modeling the universe) and for each algorithm run it to see if it produces the same evidence observed. As we gain new bits of evidence, we discard any hypothesis that contradicts the evidence observed so far, because it is incorrect.
What probability should you assign to the proposition that the next observed bit will be a 1? How should we choose between the infinite remaining models that have not yet contradicted observations? That’s the question of priors. We have to weight them with some probability distribution, and (when normalized), they must sum to a probability of 100%, by definition of “probability”. We obviously can’t give them all equal weight or our sum will be “infinity”. Giving them increasing weights would also blow up. Therefore, in the limit probabilities must decrease as we enumerate the hypotheses.
Can you? It’s not enough that it decays; it must decay fast enough to not diverge to infinity. Faster than the harmonic series (which is logarithmically divergent), for example.
Solomonoff’s prior is optimal in some sense, but it is not uniquely valid. Other decaying distributions could converge on the correct model, but more slowly. The exact choice of probability distribution is not relevant to our discussion here, as long as we use a valid one.
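To make the mechanism concrete, here is a deliberately simplified toy sketch (not real Solomonoff induction: the “hypotheses” here are just “repeat this finite bit pattern forever”, and the decaying prior is chosen only so the total weight converges; the function and parameter names are my own):

```python
from fractions import Fraction

def toy_induction(observed_bits, max_len=8):
    """Toy stand-in for the scheme described above: enumerate simple hypotheses,
    weight them by a length-decaying prior, discard those contradicted by the
    evidence, and predict the next bit by a weighted vote."""
    hypotheses = []
    for length in range(1, max_len + 1):
        for code in range(2 ** length):
            pattern = [(code >> i) & 1 for i in range(length)]
            prior = Fraction(1, 4 ** length)  # 2^(-2*length): 2^length patterns per length, so the sum converges
            hypotheses.append((pattern, prior))

    # Discard hypotheses that contradict the evidence observed so far.
    consistent = [
        (pattern, prior) for pattern, prior in hypotheses
        if all(bit == pattern[i % len(pattern)] for i, bit in enumerate(observed_bits))
    ]

    # Predict the next bit by a prior-weighted vote of the surviving hypotheses.
    total = sum(prior for _, prior in consistent)
    weight_one = sum(prior for pattern, prior in consistent
                     if pattern[len(observed_bits) % len(pattern)] == 1)
    return float(weight_one / total)

print(toy_induction([1, 0, 1, 0, 1, 0]))  # ~0.997: short consistent patterns dominate the prediction
```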
If the observation is in no way compressible, then there is no model simpler than the observation itself, and your prediction for the next bit can be no better than chance. Maybe you haven’t observed enough yet, and future bits will compress.
But there can be no agents in a totally random universe, because there is no way to predict the consequences of potential actions. We can rule that case out for our universe by the anthropic principle.
That’s right. So what is your alternative? Give up on induction altogether? That’s completely untenable.
OK, let me repeat more precisely, so you can see whether I understand everything correctly and correct me if not.
1. We have the Universe, which is like a black box: we can make some experiment (collide particles, look at a particular region of the sky) and get some data. The Universe can be described as a mapping from the space of all possible inputs (experiments) to all possible outputs (observations). To be very precise, let us discuss not the observations of humanity as a whole (since you do not observe them directly), but only your own observations at a particular moment of time (your past experiments and observations now come from your memory, so they are outputs from your memory).
2. If there are 2^K possible inputs and 2^M possible outputs, there are in total 2^N = (2^M)^(2^K) possible mappings.
3. We can represent this mapping as an output of a universal Turing machine (UTM), whose input will be our hypothesis. There are different realizations of the UTM, so let us pick one of the minimal ones (see Wikipedia).
4. There will be more than one hypothesis giving the correct mapping: “The witch did it”, “Dumbledore did it”, etc. Let us study the probability that a given hypothesis is the shortest one that reproduces the correct mapping. (If we have more than one shortest, let’s pick the one assigned to the smaller binary number, or just pick randomly.) Under such a rule there is only one shortest hypothesis. It exists because there is a correct hypothesis, “The witch did it”, that might not be the shortest, so we just look for those that are shorter.
5. The probability that a hypothesis of length n is the shortest hypothesis, for n < N, is a priori not larger than 2^(n-N), since there are 2^N possible mappings and only 2^n possible hypotheses of that length (the counting is spelled out in the note after this list).
6. The anthropic principle does not help here. You know that you perceive input and produce output, but a priori you cannot assume anything about future input and output.
7. Now you want to introduce a new principle, predictivity: that you actually can predict stuff. I agree with introducing it. This leads to the strong assumption that our mapping is in fact one of those that can be produced by a short hypothesis. So you redefine the probabilities such that there is a peak at short hypotheses, and the integral is still 1.
8. Let us look closer at our options. Funny enough, Solomonoff’s lightsaber does not actually converge fast enough. Indeed, you have a 2^(-n) probability for a particular hypothesis of length n, but there are in total 2^n hypotheses of length n, which gives you 1 for all the hypotheses of length n together. Thus you sum 1 from 0 to infinity, obtaining a divergence. To fix it you can simply take the probability to be 2^(-a n) with a > 1 (the sums are written out in the note after this list).
9. However, is convergence the only a priori thing that we require? I would say no. Indeed, can an input of length 1 to one of the minimal UTMs make it produce an output of length N >> 1 and halt? My probability for this is incredibly low. (Of course, you can construct a UTM so that it does, but it will not be minimal.) Notice that I do not say “complex input” or anything like that; I am concerned only with size. I would say the same for all very small lengths. If you have some free time and are good at coding, you can play with the smallest known UTMs to see which shortest inputs produce large but finite outputs; this would give an estimate of how small n can be. Let us call it n_0.
10. Now we would like a function that is almost zero for n significantly smaller than n_0, grows fast around n_0, and then decays (fast enough to keep the sum convergent). So it will have a maximum, and this maximum will have some width. What is its width? Is it just a matter of taste? To understand this, let us return to the reason we started searching for this function: the need for predictivity.
11. So, since we basically need to be able to predict future observations, the width of the function is limited by us. If it is too wide and we need to include a highly complicated hypothesis, we fail, simply because it is too hard for us to calculate anything based on such a complicated hypothesis. Thus, we limit ourselves to hypotheses simple enough to use, and this gives the width of the function.
12. To sum up, if hypothesis B is more complicated than A but can still be used to give predictions, it should not be discarded by assigning it a very low prior probability in comparison with hypothesis A.
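A note spelling out the arithmetic behind points 5 and 8 above (just restating what is claimed there): for point 5, there are at most \(2^n\) hypotheses of length \(n\) but \(2^N\) possible mappings, so the fraction of mappings reproducible by some hypothesis of length \(n\) is at most
\[
\frac{2^n}{2^N} = 2^{\,n-N}.
\]
For point 8, with weight \(2^{-n}\) per hypothesis of length \(n\),
\[
\sum_n 2^n \cdot 2^{-n} = \sum_n 1 = \infty,
\qquad\text{while}\qquad
\sum_n 2^n \cdot 2^{-an} = \sum_n 2^{(1-a)n} < \infty \ \text{for } a > 1.
\]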
I’m not sure if this is all correct. What you’re describing doesn’t exactly sound like Solomonoff induction, but you do seem to have a grasp of the principles involved.
Solomonoff induction does not discard any program that is consistent with the observation so far. But for any observation string there are an infinite number of programs that produce that string. There is a shortest one, then an infinite class of versions of that program prefixed by some whole number of no-operations (computations that undo themselves). And compilers implementing that same program in encodings of other programming languages. And interpreters implementing that same program in encodings of other programming languages. And entire universes containing people who happen to be simulating one of these (which may be considered an unreliable type of interpreter). And arbitrary nestings of any of the above any number of times. None of this is discarded. But again, the set is infinite, so no matter what distribution you choose, the probabilities after some point must decrease for it to converge.
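For reference, the standard way to write the weighting just described, assuming a prefix-free universal machine \(U\) (this is the textbook formula, not something new to the discussion), is
\[
M(x) = \sum_{p\,:\,U(p)=x\ast} 2^{-|p|},
\]
where the sum runs over all programs whose output begins with \(x\); because the programs form a prefix-free set, the Kraft inequality guarantees \(\sum_p 2^{-|p|} \le 1\), so the mixture converges.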
The point about the witch isn’t that “witch” is a complex cost to encode in a program (although it is), but that “she did it” fails to compress the data at all, because you still have to encode what the pronoun “it” is referring to. Because a “witch” can be blamed for literally anything, adding a “witch” to the uncompressed hypothesis “it” adds no predictive power whatsoever. (If you can compress “it” some other way, then you can make predictions without the witch and she is useless to your model.)
God, who can likewise be credited for anything (even what looks like evil—”all part of God’s plan”, or “God works in mysterious ways”, right?) is the same as the witch: no predictive power over “it” whatsoever. And worse, God’s complexity cost is not just relatively big like any intelligent mind (such as the witch) would be, but literally infinite if we say that God is omniscient: If God is a “halting oracle”, then God is not even contained in the set of all computer programs, because He is not computable: He can’t even be a hypothesis, only approximated. And to get a better approximation, you must use a longer computer program that encodes more of Chaitin’s constant, which is provably not compressible by any halting program. Better approximations of God get bigger without limit. The approximate God hypothesis has literally infinitesimal probability—you can’t escape it: The better the approximation gets, the less likely it is.
And the true God hypothesis is not even in the running. It literally cannot be proved by induction at all. Nor can you take God as an axiom. (I will dismiss it as the fallacy of special pleading: applying this privilege lets us prove anything, even false gods.) The only hope then is proving deductively from some logical necessity, or giving up on omniscience as defined, which of course, opens the possibility of there being beings greater than whichever God you choose.
″ God, who can likewise be credited for anything (even what looks like evil—”all part of God’s plan”, or “God works in mysterious ways”, right?) is the same as the witch: no predictive power over “it” whatsoever.”
Not exactly. First, I can predict that if I throw a stone it will fall down, and stuff like that. A miracle may happen, but the probability for it to happen out of nowhere is very small (though not zero). Second, I give higher probabilities to what are common occasions for miracles (like the myrrh-streaming icons mentioned above, or healing, or answers to prayers). With the no-God hypothesis I must set such probabilities to zero, while if there is a God I keep them finite. So, first, such a theory can predict something (whether the predictions are correct or not is a discussion for a separate thread; I will go back to it when I have time). Second, the predictions do not always coincide with the no-God theory’s predictions (as they would for a deist theory, in which there is a God who does not interact with the Universe), so it is a different theory.
″ And worse, God’s complexity cost is not just relatively big like any intelligent mind (such as the witch) would be, but literally infinite if we say that God is omniscient: If God is a “halting oracle”, then God is not even contained in the set of all computer programs, because He is not computable: He can’t even be a hypothesis, only approximated. And to get a better approximation, you must use a longer computer program that encodes more of Chaitin’s constant, which is provably not compressible by any halting program. Better approximations of God get bigger without limit. The approximate God hypothesis has literally infinitesimal probability—you can’t escape it: The better the approximation gets, the less likely it is. ”
Hmmm. Indeed, you are totally right here. I actually never thought that incomprehensibility is directly connected with omniscience. Thank you very much for this; it makes me reconsider a lot of things.
We can indeed have only approximate knowledge of God. However, this approximate version of the whole hypothesis can be short enough to compete with the no-God hypothesis (remember, I was talking about the width of the function?).
So, for example, the zeroth approximation of the God hypothesis is that God does not interact with the Universe. It basically leads to the same predictions as the no-God hypothesis, so it should be eliminated (actually it is not that simple; I will say more about it closer to the end of this comment). The first-order approximation will be a God very rarely interacting with the Universe, so there are miracles with very low probability. Higher orders will give a clearer classification of these miracles. You can see that these approximations have predictive power, are not significantly longer than the no-God hypothesis, and their set of predictions is not identical, so they are decent competitors.
What is the difference between such an approximation and the same approximation for alien teens, etc.? Why would we prefer the God hypothesis to the alien teens? Well, because saying “there is a God with such and such attributes” is simpler than saying “there are alien teens who shape the reality around us such that it looks like there is a God with such and such attributes”.
But why do we need to say that there is an omniscient God at all, if all we are going to do is use approximations? Well, let me give you an analogy from mathematical physics. There is such a thing as M-theory. Well, to be honest, M-theory has not been formulated. However, merely the assumption that such a theory exists (even though it is not formulated) leads to some interesting dualities between other theories. The same holds here. The assumption of the omniscient God gives fruitful approximations. Whether they are correct or not is the discussion on miracles in the other thread. But we cannot simply say that they have very low prior probabilities, since they are not significantly longer than the no-God hypothesis and are within the width of the maximum of the probability distribution.
An update of beliefs! We are making progress.
So are you weakening the original claim? You are no longer trying to persuade me of an omniscient being, but only a sufficiently knowledgeable one?
Yeah, at this point, I think we may be talking about aliens, not God, but we’re going to use your definitions of the terms. I personally wouldn’t expect omniscience of a small-g “god”.
I don’t really agree with that, and here is an illustration of why:
Suppose I tell you that I have an aunt that owns a dog. I think most people would just believe me. Aunts are not at all rare, and neither are dogs. Maybe I could be lying to prove a point, but dogs are so common, that I probably could have picked another relative with no need to lie about it.
Now, suppose I tell you that I have an uncle who owns a tiger. I think most people would not just believe that easily. There certainly are people who own tigers though. So maybe you’d be persuaded with a little more proof. Maybe I could show you a picture. That might help until you realize that you’ve only ever met me online, and have no idea what I look like. Maybe I’m not the man in the photo (I could be a woman for all you know), and maybe the owner is not my uncle. Maybe I could do a video chat with you and you could see I have the same face. That would help, but maybe I used Photoshop on the tiger picture to insert my face. At some point though, the evidence would be good enough, or you’d call my bluff.
Now, suppose I tell you that I have a nephew who has a pet purple martian dragon. Your first impression might be, is that a Pokemon? A toy? (Even understanding what another person is saying requires some shared priors.) “No, I mean it’s literally an alien creature from Mars,” I say. Did he tell you that? Kids have wild imaginations. “No, no, I saw it.” OK, we know life exists on Earth, there’s no physical reason why it couldn’t exist on other planets. It’s not outside the realm of possibility, but you’re going to need a lot more evidence than for the tiger.
Now, suppose I tell you that I have a niece with a pet genie. He can turn invisible and follows her to school. She gives him lamp rubs and sometimes he grants her minor wishes using magical powers when he’s in a good mood. Does this seem more or less likely than the purple dragon? The dragon is at least compatible with what we know of science. Magical powers, not so much.
The above stories are an illustration of how you, or people in general are already using priors. The lower the prior, the more evidence is required to overcome that prior.
Now suppose I tell you that I have an internet acquaintance who has an invisible friend named Steve, whom he communicates with via mental telepathy, although Steve seems oddly reluctant to answer sometimes. Steve has phenomenal cosmic magical powers and can rearrange stars and stuff. “Have you ever seen Steve do this?”, we ask. “No, but I’ve seen him remotely draw pictures of his mother on toast.” I don’t know about you, but I’m gonna need a little more proof than that. Right? Does this sound more or less likely than my niece’s genie? Even the genie could explain the toast. Not only is Steve invisible, he has stronger magic? Wouldn’t we need at least as much evidence as for the genie? Or for the dragon? The tiger?
Oops, hold on. My acquaintance tells me I got the name wrong; it was a glitch in Google Translate. His real name is not Steve, it’s Jesus. (. . .) I guess that settles it.
Past some point these cases are indistinguishable with the finite amount of available evidence anyway, so I would argue that the difference is meaningless, at least from the perspective of induction on evidence: it makes no difference to the resulting predictions.
However, the difference may still matter to arguments of logical necessity, and if your faith has some creed that cares about the distinction, the weakened definition of God may still be a problem for you.
Well, a violation of the Laws of Nature is a violation of the Laws of Nature, whether it applies to remote drawing without any interaction or to moving stars. If Steve can draw pictures on toast remotely, he violates the Laws of Nature, and the hypothesis that the Universe is completely controlled by the Laws of Nature, without any Higher Power, aliens, a guy who runs a simulation, etc., is falsified.
Now, going back to aliens vs God hypothesis.
″ The dragon is at least compatible with what we know of science. Magical powers, not so much. ”
The problem is that compatibility of a hypothesis with what we knew before is not an argument at all when we are talking about fundamental hypotheses (i.e., not “who stole my car” but a hypothesis explaining the Universe). Indeed, look at the history of Quantum Mechanics. Initially, a lot of scientists hated the idea that the probabilistic description of the Universe is fundamental, so they came up with the idea of hidden parameters. Everything they knew before was deterministic. If you knew all the velocities and positions of all the molecules, you could predict everything exactly; you just did not, and that is where classical probability came from. So they suggested the same idea for Quantum Mechanics: that actually everything is still deterministic, we just don’t know the hidden variables, and that is why observations appear probabilistic. You do not need to invent a modified Turing machine that would produce different outputs with different probabilities; you can keep the good old deterministic Turing machine. Looks much better, right?
Then it turned out that you actually can distinguish between the hidden-parameters hypothesis and the fundamentally probabilistic hypothesis: see Bell’s inequalities. And the experimental tests demonstrated that there are no hidden parameters. QM is fundamentally probabilistic.
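For reference, the CHSH form of Bell’s inequality (standard statement, added only for concreteness): any local hidden-variable theory must satisfy
\[
|E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2,
\]
while quantum mechanics predicts values up to \(2\sqrt{2}\) for suitable measurement settings, and experiments observe the violation.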
Thus, the fact that we need to throw all our current assumptions into the trash and build a theory on new assumptions does not mean that we should assign a small probability to the new theory and need hidden parameters or hidden aliens to reproduce the same observations. It just means we may have been wrong.
The alien hypothesis dominates the God hypothesis, because God is infinitely improbable, but aliens are only finitely improbable.
You seem to be arguing that we can bias our prior to accept an approximate God at the very edge of the “width”. I say the rights of Mortimer Q. Snodgrass are being violated.
Why your God,
“You seem to be arguing that we can bias our prior to accept an approximate God at the very edge of the “width”. I say the rights of Mortimer Q. Snodgrass are being violated.”
No. If you read the comment about the width of the function you can see that my argument is not about God at all, but about what we need from the hypothesis (predictivity).
″ The alien hypothesis dominates the God hypothesis, because God is infinitely improbable, but aliens are only finitely improbable. ”
No. We use the approximation, and the approximation has the same size for both of them (we are comparing the hypothesis "There is a God with such and such attributes" with "There are aliens making us believe that there is a God with such and such attributes"). The algorithm for constructing this approximation, though, is simpler for the pure God hypothesis (it uses the mere fact of God's existence without formulating the full hypothesis, just as we establish dualities between different types of string theories using the fact that M-theory exists without formulating it), since it does not require the intermediate link of "hidden aliens".
“Why your God,
″
Suppose that soon after the discovery of the muon I tell you that there is another particle, like the electron, but with a mass of 105.6583745(24) MeV and a lifetime of 2.1969811(22) microseconds. You would tell me: "OK, I can assume that there is a particle like the electron, although I would assign it quite a low probability. But to believe that its mass is 105.6583745(24) MeV!? No, it is absurd: there are a trillion other possibilities!"
Of course. The a priori probability of each of the different gods is approximately the same. In total, they add up to the prior probability that there is some God, and I was arguing that this prior probability is finite. Then, after you make observations, you can discover more attributes of God and arrive at Allah, Christ, the Flying Spaghetti Monster, aliens, or nothing beyond the Laws of Nature.
I’m not understanding this part. If we already assume that aliens and God exist (which is not allowed because it’s begging the question) then of course it’s simpler to assume God explains the evidence than to introduce the additional hypothesis that the aliens are also trying to fool us.
But without committing the fallacy of begging the question, we are left with the conjunctive hypothesis of “aliens exist” and “they are trying to fool us” that dominates “there is an omniscient being” (which must have an infinitesimal prior), never mind all the other attributes of your particular God.
“that dominates “there is an omniscient being” (which must have an infinitesimal prior)”
It must not, because the theory does not completely describe the omniscient being; it only states its existence. If your theory claims that the Universe is infinite (which may be true, we might live in an open Universe), that does not mean your theory itself is infinite.
Once again, how did you distribute your priors? By how easily you can use the theory to make predictions. In both cases, hidden parameters or hidden aliens, you say: OK, let us keep our old assumptions, but introduce a hidden thing Y that works in such a way that our observations can be explained by X. X alone is not good: it requires going from a deterministic to a probabilistic Turing machine (QM), or acknowledging that the theory exactly describing our observations may be infinitely large while we can only approximate it. Y gives some hope of resolving this, of staying within a deterministic Turing machine, or within a finite though large theory of everything. However, in both cases the use of Y is just "Y simulates X". In my opinion you do not even need Solomonoff's lightsaber here; the plain Occam's razor is enough to see that Y is redundant.
Equivocation. The algorithmic (Kolmogorov) complexity cost of the conjunction of “simulated X” and Y is finite, but the “real X” is infinite, therefore, the former must be preferred by Occam’s razor. “Simulated X” is a deception by aliens and is not a full halting oracle, but a finite approximation of one. It can’t do everything the “real X” could.
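To make the length penalty concrete, here is a toy sketch of the kind of prior I mean (the description lengths are invented for illustration; nothing here is a real measurement of anybody's hypothesis):

```python
import math

def length_prior(description_length_bits):
    """Toy Solomonoff-style prior: proportional to 2^(-K), where K is the
    description length in bits. A hypothesis with no finite description
    (K = infinity) gets prior 0."""
    if math.isinf(description_length_bits):
        return 0.0
    return 2.0 ** (-description_length_bits)

# Invented description lengths, purely for illustration:
hypotheses = {
    "aliens exist AND they fake the miracles": 40 + 15,  # conjunction: lengths add
    "finite approximation of 'God'": 50,
    "omniscient God (a full halting oracle)": math.inf,
}

for name, k in hypotheses.items():
    print(f"{name}: prior ~ {length_prior(k):.3g}")
```

The disputed step, of course, is the last entry: whether "omniscient" really forces an infinite description.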
I do not believe that aliens are performing miracles, just that that explanation is infinitely more probable on priors than an omniscient God. The miracles you have pointed to so far are best explained as natural accidents or hoaxes, not nearly enough evidence to even suggest aliens.
OK. It looks like we have started to go in circles; sorry for not being clear enough. Let me try to explain once again.
You have a lot of observational data. You have significantly more potential observational data you could gather. I had earlier considered all the potential observational data to be finite; however, I now understand that this is not necessarily so, for example if a scientific breakthrough, aliens, or God turns us into immortal creatures whose ability to gather, remember, and process information grows every year.
So, you want to find a theory, based on the already observed data, that would predict the data not yet observed. I bet we both believe this is possible, but with some limitations.
1. Does a finite theory exactly predicting all the data exist (in the sense of a Turing machine)? Since all the data is infinite, the prior probability of such a theory would be zero, absent any other assumptions. You can introduce a strong predictivity assumption, basically stating that such a theory exists. However, I think this assumption is too strong (based on the a posteriori results of quantum mechanics, where you can predict only the probability of an observation but not its definite outcome, so your theory can recover only part of the observed data). Instead I would suggest a weak predictivity assumption:
2. The theory exactly predicting all the data is infinite (such infinite theories exist, for example "the witch did it", where "it" is "all the data to be observed"); however, its finite approximations can predict some part of the data with some precision.
You can try to make this stricter, saying: "Among all the finite approximations there is one with maximal predictive power", but I do not see any argument for that. The prior expectation is that you can keep increasing the precision by increasing the length of the theory.
Now we would like to classify the finite approximations by their precision and length. First, does a mere reference to the existence of the exact infinite theory make the theory under consideration infinite too? No; otherwise we would have to admit that Tegmark's theory of the mathematical multiverse (all mathematically consistent worlds) is infinite. It refers to the existence of all possible worlds without describing each of them. In the same way, a theory stating that God knows everything does not state what exactly He knows. Thus our approximation of the infinite theory of an omniscient God is just "God exists with such and such attributes", and it is finite. The approximation "aliens fake a God with such and such attributes" is also finite, but longer. It may seem better because "aliens faking God" could potentially be an approximation of a finite exact theory that predicts everything; however, as we discussed before, there is no reason to assume that such a finite theory exists, and hence no reason to think that "aliens fake God" dominates "God exists" merely because the first approximates a finite theory and the second an infinite one. We compare the lengths of the approximations, not of the full theories, and the approximation "God exists" is shorter and thus should be preferred.
″ I do not believe that aliens are performing miracles, just that that explanation is infinitely more probable on priors than an omniscient God. The miracles you have pointed to so far are best explained as natural accidents or hoaxes, not nearly enough evidence to even suggest aliens. ”
Let us first fix the priors and then move to discussing miracles, ok?
Yes, I just started to notice that after re-reading this thread. It seems like we’re talking past each other without understanding. For Double Crux to work, we’re not supposed to aim for direct persuasion until after we’ve identified the double crux, or we’ll get “lost in the weeds” discussing the parts that aren’t important to us. Have we found it yet? I think we have not, and that’s what went wrong here.
I have yet to identify a single crux, but part of that might be because I don’t understand your concept of God. I don’t know what crux could possibly convince me your God exists, because I still don’t know what “God” means (to you).
I'm honestly not that familiar with the Eastern Orthodox tradition. Protestant sects are more common in my country. The God concept worshiped by the average churchgoer here seems laughably naive, and logically incoherent, but it does have some differences from what you've described so far. And the apologists, even in my country, seem to have a different definition than the average churchgoer (in my country), probably because the naive definition is so indefensible. It's motte-and-bailey rhetoric: a combination of bait-and-switch with equivocation.
So I’ll ask again: Is omniscience a crux for you? That is, if a source you would consider authoritative (the bishops, the Patriarch, archeology, visions from God, whatever it takes) explained to you that omniscience was not an attribute of God as He revealed Himself, but a later misrepresentation made by sinful philosophers, would you then say your God does not exist?
If you answer, “Then my God still exists and is not quite omniscient as I had once believed,” then omniscience is not a necessary attribute for your God definition, and there is no need to discuss it further, because it is not a crux.
But, if you answer, “A ‘God’ that is not omniscient is no God of mine,” then omniscience is a crux for you and we need to nail down what that means, because it might be closely related to a crux of mine.
I’m not sure this is part of the authoritative definition of doublecrux, but FYI the way I personally think of it is “Debate is when you try to persuade the other person [or third parties] that you’re right and they’re wrong. Doublecrux is when you try to persuade _yourself_ that they’re right and you’re wrong, and your collective role as a team is to help each other with that.” (I don’t think this is quite right, obviously the goal is for both of you to move towards the truth together, whatever that may be, but I think the distinction I just made can sometimes be helpful for shaking yourself out of debate mode)
I’m not sure if anyone has an authoritative definition of doublecrux yet. But as this is my first real attempt at it, I appreciate guidance. We did open with the Litany of Tarski, but I might have lost sight of that for a moment. I maintain that I at least need to understand what my interlocutor is saying before I can conclude that he is right.
Again, the Litany of Tarski: If a God exists, I desire to believe that is the case. An update for either side is a victory. But the goal is not to fool myself or give up, or give in to confusion. The update must be an honest one, or the whole exercise is empty.
Yes. Omniscience is a crux for me.
Wouldn't the prior probability that God exists be a crux for you? I.e., if you changed your prior from infinitesimal to something not negligibly small, would it change your position? The infinitesimal prior is, at least, a crux for me.
Let me also note that our positions do not cover the whole spectrum of possible answers (it is not exactly "A" or "not A"). I.e., as far as I understand, you think the world is completely controlled by the laws of nature, while I think there is a God as Eastern Orthodoxy describes Him. In between there are many other options:
-simulation
-aliens
-a Higher Power (this includes my belief as a particular case)
-a world that is not fully describable by math, only approximately
-and whatever else that just does not come to my mind
It means that we could both be wrong simultaneously.
Getting past an infinitesimal prior to a tiny finite one is a long way from “more likely than not”.
But more simply, my prior is my position. If you get my prior belief for the proposition "God exists" over 50%, then you've won: at that point I've become a theist by definition (though maybe not a very confident one). This isn't a crux; it's the original proposition!
Errr, not completely: you have a prior and you have experience. For example, suppose that after a long discussion you agree that the probability that God exists is not infinitesimal but 0.01%. OK, you are still more an atheist than a theist. Then, if you observe miracles, you update to a much higher probability; but you cannot do that if your prior is infinitesimal, as it is now.
What would you put your priors now for the following:
-the Universe is completely describable by a finite set of laws, no other reality behind
-the Universe is approximately describable by a finite set of laws, and the approximation improves with the length of the theory (an infinite theory would be needed for a full description)
-the Universe is a simulation
-aliens
-something else
I don’t consider this question well-posed. Physics seems to be working pretty well. But what do you mean by “Universe”? The part we can observe? Surely there’s more to it than that. And “laws” can be dependent on context. The law that objects accelerate downward at 9.8m/s/s doesn’t apply on Mars, but there’s a similar law with a lower number and an underlying law of gravity connecting both cases. Laws that seem to be “fundamental” now are probably dependent on local conditions. The “symmetry breaking” observed in particle physics indicates this. And very simple rules like Conway’s Life can produce very complex behavior, with emergent “laws”, like “gliders travel diagonally”. Is this law from a reality behind or in front of Life?
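To make the glider example concrete, here is a minimal sketch (plain Python, just the standard Life rules; the coordinates are one conventional glider shape) of the emergent "law" that a glider shifts itself one cell diagonally every four generations:

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Life; `cells` is a set of (x, y) live cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in cells)
    }

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)

# The emergent "law": after 4 steps the glider reappears shifted by (1, 1).
assert state == {(x + 1, y + 1) for (x, y) in glider}
print("Glider travelled one cell diagonally in 4 generations.")
```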
OK, let us put it more strictly. What is your prior that there exists a finite theory that can predict all our potential future observations exactly? And what is your prior that such a theory does not exist and we can only use approximations?
N.B. By all observations I mean ALL observations, including the results of measurements in QM (not just their probabilities, we observe results too, right?)
I still don’t understand. Are you asking if the universe is deterministic?
Which sense of “exist” do you mean? Mathematically, where we can “have” imaginary things like infinite uncomputable sets, or physically, where we obviously can’t construct an object corresponding to such a thing?
Solomonoff induction cannot be run on real physics. It’s an abstracted ideal that can only be approximated. Maybe quantum field theory predicts the motion of particles to an accuracy of eleven digits, but that doesn’t mean you can use it to predict the weather. You don’t have enough computing power, and you don’t know the initial conditions to that precision anyway.
Even AIXI, an ideal agent using Solomonoff induction (which can’t be physically built), can only make probabilistic predictions based on observations made so far. There’s always an infinite class of universes (hypotheses) that have produced the observations thus far, and they always disagree on the next bit.
There’s no need to invoke quantum physics here. Given what we already know of relativistic physics, it’s always possible that a particle could approach at the speed of light and mess up your plans. Because it’s moving at light speed, there’s no way in principle you could have observed it to take it into account in advance. Even AIXI can be “surprised” by low-probability events like this, even in a deterministic universe (because it has only observed a small part of the universe so far), and it has infinite computing power!
Well, of course I do not suggest predicting the weather from the laws of QFT; I mean mathematically. Let us consider all possible future observations as data. Do you think this data can be exactly generated by a theory of finite length (as the output of a universal Turing machine with the theory as its input), or would a theory of infinite length be required to reproduce it exactly?
The observable universe probably has a finite number of possible states.
The laws of physics appear to be deterministic and Turing computable.
Therefore, an infinite theory would never be required. (And this makes me sympathetic to the ultrafinitists.) The laws of physics can be mapped to a Turing machine, and the initial conditions to a (large, but) finite integer. There is nothing else.
But I’m not sure that “all possible future observations” means what you think it means.
In the MWI, any observer is going to have multiple future Everett branches. That’s the indexical uncertainty. Before the timeline splits, there is simply no fact of the matter as to which “one” future you are going to experience: all of them will happen, but the branches won’t be aware of each other afterwards.
And MWI isn’t even required for indexical uncertainty to apply. A Tegmark level I multiverse is sufficient: if the universe is sufficiently large, whatever pattern in matter constitutes “you” will have multiple identical instances. There is no fact of the matter as to which “one” you are. The patterns are identical, so you are all of them. When you make a choice, you choose for all of them, because they are identical, they have no ability to be different. Atoms are waves in quantum fields and don’t have any kind of individual identity. You are your pattern, not your atoms. But, when they encounter external environmental differences, their timelines will diverge.
Copies of you that arise purely from the size of the universe will have the same counterfactual or functional behaviour; that is, they will do the same thing under the same circumstances... but they will not, in general, do the same thing, because they are not in the same circumstances. (There is also the issue that being in different circumstances and making different decisions will feed back into your personality and alter it.)
I’m pretty sure I said that:
I don’t understand your point.
″ The observable universe probably has a finite number of possible states. ″
Not so sure about that. For this you need at least
1. The Universe must be finite (i.e., not an open Universe, but something like the surface of a 4d sphere). This is possible: the measured curvature of the Universe is approximately on the boundary, but an open Universe is also possible.
2. The Universe must be discrete at the microscale. Again, according to some theories it is; according to others, it is not.
So I would say: "maybe yes, it is finite, but the prior probability is far from being 1".
Side note: a Universe with a finite number of states is quite a depressing picture, since it means everything will inevitably end up in the highest-entropy state, and hence the inevitable end of humanity. Of course, this contradicts nothing, but in such a model any discussion of existential threats to humanity (like superintelligence, quite popular here) makes no sense, since the end is unavoidable anyway.
″ And MWI isn’t even required for indexical uncertainty to apply. A Tegmark level I multiverse is sufficient: if the universe is sufficiently large, whatever pattern in matter constitutes “you” will have multiple identical instances. There is no fact of the matter as to which “one” you are. The patterns are identical, so you are all of them. When you make a choice, you choose for all of them, because they are identical, they have no ability to be different. Atoms are waves in quantum fields and don’t have any kind of individual identity. You are your pattern, not your atoms. But, when they encounter external environmental differences, their timelines will diverge. ”
Could you please explain in more detail? I am confused. If I measure the spin of an electron that is in a superposition of spin up and spin down, I obtain spin up with probability p and spin down with probability 1-p. How do I predict exactly, using the Tegmark multiverse, when I will see spin up and when I will see spin down?
I’m not saying that a Tegmark I multiverse is equivalent to MWI, that’s actually Tegmark III. I’m saying that Tegmark I is sufficient to have indexical uncertainty, which looks like branching timelines, even if MWI is not true. See Nick Bostrom’s Anthropic Bias for more on this topic.
Mmmm, is the explanation really so long that I need to read a whole book? Can you maybe summarize it somehow?
Only if you’re interested. I haven’t actually read the whole book myself, but I have read LessWrong discussions based on it. I think the Sleeping Beauty problem illustrates the important parts we were talking about.
Ah, I think I got the point, thank you. However, it does not resolve all questions.
1. You can’t deduce Born’s rule—only postulate it.
2. Most important, it does not give you a prediction of what YOU will observe (unlike hidden parameters, which at least could do that). Yes, you know that some copies will see X and some will see Y, but it is not an ideal predictor, because you cannot say beforehand what you will see, i.e., in which copy you will end up. So your future observed data cannot all be predicted; only its probability distribution can be.
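A small sketch of what I mean (my own illustration, with a made-up amplitude): a deterministic program can output the distribution exactly, but the individual outcomes it can only sample, not predict.

```python
import random

p_up = 0.36  # made-up value of |amplitude of "up"|^2

# The distribution is exactly computable in advance...
print(f"P(up) = {p_up}, P(down) = {1 - p_up}")

# ...but the particular sequence of outcomes is not: the best such a theory
# can do is reproduce the statistics, never the sequence you will actually see.
print(["up" if random.random() < p_up else "down" for _ in range(10)])
```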
Can’t you? Carroll calls it “self-locating uncertainty”, which is a synonym for the “indexical uncertainty” we’ve been talking about. I’ll admit I don’t know enough quantum physics to follow all the math in that paper.
Yeah, in this scenario, the “YOU” doesn’t exist. Before the split, there’s one “you”, after, two. But even after the split happens, you don’t know which branch you’re in until after you see the measurement. Even an ideal reasoner that has computed the whole wavefunction can’t know which branch he’s on without some information indicating which.
More or less. You can compute all the branches in advance, but don’t necessarily know where you are after you get there. The past timeline is linear, and the future one branches.
″ Can’t you? Carroll calls it “self-locating uncertainty”, which is a synonym for the “indexical uncertainty” we’ve been talking about. I’ll admit I don’t know enough quantum physics to follow all the math in that paper. ”
That was super cool, thank you a lot for this link!
Yes, according to our best current understanding of cosmology, the universe itself will eventually die (i.e. become unable to sustain life).
Again the laws of physics are what they are and don’t care what I want.
But in the most likely scenarios, this will take a very long time. The Stelliferous Era (when the stars shine) is predicted to last 100 trillion years, and we’re not even 14 billion years into it. Civilization may continue to extract energy from black holes for a time many orders of magnitude longer than that.
It’s not completely hopeless. Maybe in that time we’ll figure out how to make basement universes and transfer civilization into a new one, as Nick Bostrom et al have argued may be possible.
But even if we ultimately can’t, shouldn’t we try? Shouldn’t we do the best we can? Wouldn’t you rather live for over 100 trillion years than die at 120 at best?
″ It’s not completely hopeless. Maybe in that time we’ll figure out how to make basement universes and transfer civilization into a new one, as Nick Bostrom et al have argued may be possible. ”
Yeah, you see, then all the possible future observational data becomes infinite.
″ But even if we ultimately can’t, shouldn’t we try? Shouldn’t we do the best we can? Wouldn’t you rather live for over 100 trillion years than die at 120 at best? ”
Of course we should try, because there is a chance that we can succeed. Not because we would live 10^14 years and then all die. We will count on surviving forever, or it will be a pretty miserable 10^14 years without any hope.
Not sure either, which is why I said “probably”.
Note that I said “observable universe”, not “multiverse” or “cosmos”. There are regions of the universe that are not accessible because they are too far away, the universe is expanding, and the speed of light is finite. This limit is called the Cosmic event horizon
I think it is sufficient to say that the information content of the observable universe is finitely bounded. Space doesn't necessarily have to be made of pixels like some cellular automaton for this to hold. The Bekenstein bound is proven from Quantum Field Theory. How true QFT is, is another question, but the experimental evidence shows it to be extremely accurate.
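For reference, the bound on the information $I$ (in bits) of a system with energy $E$ fitting inside a sphere of radius $R$ is

$$I \le \frac{2\pi R E}{\hbar c \ln 2},$$

and plugging in the radius and mass-energy of the observable universe gives a finite, if absurdly large, number; estimates of the total information content are commonly quoted somewhere in the ballpark of $10^{120}$ to $10^{124}$ bits (the exact figure depends on assumptions, so treat it as an order-of-magnitude statement).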
″ Note that I said “observable universe”, not “multiverse” or “cosmos”. There are regions of the universe that are not accessible because they are too far away, the universe is expanding, and the speed of light is finite. This limit is called the Cosmic event horizon ”
On the one hand, you are totally correct about this, assuming the cosmological constant (the lambda term) stays what it is. There are nuances, however:
-If we are forever in de Sitter space (lambda-dominated, as now), the universe is explicitly not time-invariant (simply because it keeps expanding). There is a non-zero particle production rate, for example (an analog of Hawking radiation). It means we could potentially construct a "perpetuum mobile of the first kind", which means we could get to arbitrarily high energies: infinite room for observations. Unless this starts to have a screening effect on the lambda term.
-If lambda decreases (or is screened), the expansion may go back from lambda-dominated to matter-dominated, leading to its slowing down. In this case we could start observing regions of the universe that used to be beyond the horizon.
Anyway, there is a lot of speculation about what can and cannot be. Can we maybe agree that both prior probabilities, that all our possible future observations are finite and that they are infinite, are non-negligible? What about 1/2 for each, to start with?
I worry we may be getting lost in the weeds again. We need to try and find cruxes. Is this related to a crux of yours? What exactly are you getting at?
Even if time could be extended infinitely without the universe dying, there is no time at which the infinity has been completed. It’s always finite so far.
An “immortal” being with finite memory in infinite time will eventually forget enough things to repeat itself in a loop, living the same life over and over again.
Can this be avoided? There are limits to any physical realization of memory. If you try to pack too many bits in a given volume of space, it will collapse into a black hole. And then adding anything more will make the event horizon bigger. Infinite memory requires infinite space and energy. Maybe with basement universes it could be done. They might have to communicate through wormholes or something. This is all very speculative, so I don’t know.
Well, we could also make infinite memory (as you suggested). But OK, what would you put as the prior probability that the theoretically possible observational data is infinite? It looks like you are not strongly against it, so what about something between 0.5 and 0.1? (Of course we can't strictly prove it right now.) If you say "yes, this works" we can move on. If you claim that this probability is also super-tiny, like 10^(-1000), I will continue to argue (well, yes, if we cannot observe infinite data even over an infinite future, it does not make sense to talk about an omniscient God).
To show you what I am leading to:
-If the total possible observational data is infinite, what is the prior probability that it is exactly reproduced by a finite hypothesis? I argue that it is infinitesimal.
-What is the probability that such an infinite hypothesis exists? I argue that it is 1; for example, "the witch did (copy-paste of all the data)". The predictive force of this hypothesis is zero.
-We need predictivity, so we assume there are finite approximations that can partially reproduce the data. This assumption is weaker than the assumption of a finite exact hypothesis, so it should be preferred.
-Therefore, we should use Solomonoff's lightsaber not on full theories, but on approximations.
-Consider two classes of approximations. The first gives exact predictions where it can and predicts nothing where it cannot. The second is weaker: it sometimes gives wrong predictions. Since the second is weaker, its priors are significantly higher. So, I would say, if the observable data is infinite, most of our approximate theories will from time to time give wrong predictions.
-This does not say, of course, how often these wrong predictions occur. If they are too frequent, such an approximation is useless.
-Basically, since the predictions are the laws of nature, wrong predictions are miracles. We should expect them to exist but to be rare.
-Talking about aliens: the infinite hypothesis "a God with such attributes exists" can be used only as an approximation (that is, basically, our understanding of it). The finite hypothesis "aliens make us believe that a God with such attributes exists" can also be used only as an approximation (that is, our understanding of God plus the assumption that it is faked by aliens). Thus this approximation is longer and should be given a smaller probability.
You are not a future hyper-mind made of basement universes and wormholes. You’re a mortal human like me, with a lifespan measured in mere decades so far. Yet you claim to have knowledge of an infinite God. How did you come to this conclusion? By what method can you make such an assertion? Is this special pleading for a special case or do you use this method for anything else? Why should I consider that method sound and reliable?
My best guess: you were indoctrinated in childhood by your parents and community, long before you were old enough to develop critical thinking skills of your own. For obvious survival reasons, children are very inclined to learn from their parents and elders. The memeplex of any of the old religions must be self-sustaining, or they wouldn’t still be here. They include psychological tricks to produce fake evidence, to stop questions, to make empty threats. They include answers to your questions or at least pretend to. It became part of your identity. You later learned of the methods of science, but they didn’t become a part of you the same way. You compartmentalized the lessons and didn’t use them to update your old thinking. You sought out evidence to support your belief instead of trying to disprove it to see if it would hold up, like a scientist.
Most people seem to use this method. You are not alone. And that’s exactly the problem with it. People are using the same methods to believe in other religions that you already know to be false. How can that method be reliable if it so reliably produces the wrong answers? What makes you any different from them? Accident of birth. That’s it. Your methods are the same.
Maybe that’s a crux for me. If it could be shown that a God belief was founded on a sound epistemology that reliably produced good results, instead of these obvious fallacies, I would have a much harder time dismissing the proposition as a fraud.
” You sought out evidence to support your belief instead of trying to disprove it to see if it would hold up, like a scientist. ”
1. If I would do this I would never go to this website discussing this with you. Assume good intentions.
2. As you said, for an infinitesimal prior probability no evidence is enough. That is exactly what I am arguing about here. If I am persuaded that the probability is indeed infinitesimal, all my evidence counts for nothing. I could see the resurrection of the dead and it still would not be enough.
3. I could level the same accusation at you. I am not going to guess, but there are so many stories of atheists who became atheists just because God did not do what they asked: "I do not want to deal with a God that does not do what I want, therefore there is no God."
Ok, let us go back to our business if you don’t mind.
″ If it could be shown that a God belief was founded on a sound epistemology that reliably produced good results, instead of these obvious fallacies, I would have a much harder time dismissing the proposition as a fraud. ”
First, could you review the previous comment to see whether you agree with the logic, and if not, tell me what in particular you disagree with.
Second, if you agree with this logic, you should acknowledge that there is a non-negligible prior probability that miracles exist in principle. You can claim that they are rare, and each time you do not observe a miracle you can say they are even rarer.
Third, if you acknowledge that miracles can happen, it is worth looking at the particular cases where someone claims they have happened. In a large organised religion (the Catholic, Anglican, or Russian churches, for example) there is very often a special committee (usually including scientists) that checks whether what people claim to be a miracle is indeed a miracle. Very often they find it to be a hoax or a natural effect, but sometimes they acknowledge that it is indeed a miracle. Other religions may also have miracles, as may things outside religion, but there may be no developed institution of miracle verification there.
A fair point. But I still think you are compartmentalizing.
It’s never enough for induction, performed correctly. But an a priori deductive argument maybe could work. I’ve heard theists attempt these arguments, but have not found them convincing.
I am trying to find cruxes, not blame. I would rather leave our identities out of it and examine the question as objectively and impartially as possible. But your epistemology is extremely relevant in this case. It’s the rights of Mortimer Q. Snodgrass again. I don’t think the God hypothesis has enough going for it to even justify raising it to our attention. If we had started with a good scientific epistemology, this would not even be a question. Instead we started with a biased indoctrination, and have to dig ourselves out of it.
It’s the availability heuristic again. Who have you heard these stories from? It’s probably not the atheists themselves! You can’t trust the clergy to be honest about this topic. They believe atheism is damnation, and so must present it as a sin. But for those raised atheist with a scientific worldview, believing in God seems as silly as believing in Santa Claus or the Tooth Fairy.
In my case, I was raised as a believer. My perspective changed due to an accumulation of a number of factors. The Problem of Evil was apparent to me in childhood. It introduced a doubt that I could not resolve. The biblical creation story also didn’t align with what I read of science as a child.
When I expressed my misgivings, my church told me that God was a God of Truth, and the teachings of the Church could not possibly contradict the Truth, once it was properly understood. So I withheld judgement until I could learn more. I held both the religious and the scientific worldview in my mind at once, in the hope that they could eventually be unified. I was compartmentalizing, but I was conscious that I was doing so. I could speculate and philosophize in either religious or scientific modes, and I knew which was which. I saw the fruits of science. Computers and rocket ships and vaccines. I had church-related experiences I could only describe as spiritual. Surely they both had to be true?
I studied my faith in depth. I was warned of the sin of pride. I was uncertain how to interpret that, but after study, concluded that the problem with pride was an unwillingness to learn from error. I resolved to always be honest with myself. God was a God of Truth, after all, so honesty could not be wrong. I learned to think more critically. I found many satisfying answers, but my doubts on these points, and more, only deepened. There was evidence against the faith, that was for certain. Doubts remained, but abandoning my faith would mean damnation and I could never convince myself it was false beyond a reasonable doubt.
Then I learned that civil cases were judged according to the preponderance of the evidence, rather than beyond a reasonable doubt. In my commitment to honesty, I judged my faith again by this standard. Suddenly, many of the faith-promoting stories I had considered “evidence” no longer appeared that way. They were indistinguishable from no God at all. Once seen, I could not unsee it. Why was God pretending so hard not to exist? So we are less culpable for sins? Then why have a church at all? My faith was shaken (and not for the first time), but still I believed. I resolved to study more, to try and rebuild what I had lost.
In my church, we brethren sometimes minister to the other members, usually in pairs. I was usually too shy to participate, but I had studied enough to know answers from the scriptures. When ministering to one poor sister who was struggling, I went into religious mode and spouted off the relevant doctrine. This happened to be a point I had doubts about. And then the realization struck me: I didn’t believe a word of it. I sounded that confident, and I didn’t believe a word. I had lied to her. And worse I had lied to myself, the exact thing I had resolved not to do. I had so easily broken my commitment to honesty, just by studying doctrine. And if I could do it, so could any of the other members! They could sound so convicted, and yet not know! The testimony of the others I had been relying on may have been founded on nothing but air.
I still had my spiritual experiences, but they had always resisted critical examination. I finally understood that what I thought was the witness of the Holy Spirit, was only those around me interpreting my emotions for me in a certain way. They were spouting off doctrine memorized by repetition, the same as I had done to that poor sister. In another context, the same emotions could have been a witness for a completely different god. My faith was shattered to its very core.
My church regards all others as apostates. I had rejected them long ago. There was nowhere to turn. For a time, I considered myself agnostic. I told my story to a confidante, and she replied with something like, “so you’re an atheist then”. And in that moment, I realized it was true. I’m an atheist. I can’t believe in God anymore, even if I try.
And after reading the Sequences, and understanding Bayes, I realized that the faith-promoting stories I had thought were evidence, and then eventually no evidence at all, were actually evidence against the church. The church had actually been preemptively preaching some of its worst stories, so we would learn to think of them in the best possible light, before we had a chance to hear a more critical presentation from anyone else.
Due to indexical uncertainty, we can always be surprised by low probability events. I don’t see these as evidence of God though.
Around in circles again, but is there a difference this time? Do we agree “fake alien God hypothesis” dominates “infinite God hypothesis”? When using induction? You don’t seem to be disputing it. But is “approximate God” simpler than “fake alien God”? That depends! How good is your approximation of “infinite”? How complex are your aliens?
But if you want to argue for a non-infinite God, that’s OK with me, but even if you convince me, it won’t be the infinite God you have convinced me of, but the finite approximation: Something more powerful than mankind, but not infinitely powerful. Something more knowledgeable than mankind, but not infinitely knowing… this sounds like you’re describing advanced aliens. They’re the same thing. I would then argue that the aliens are the reality and the “infinite God” is the approximation of them made by ignorant humans.
Even I would be willing to call such aliens “gods” given certain conditions, but we’re using your definition of “God”.
Can you convince me of approximately-God aliens? Maybe. My prior is not zero, but like the pet purple dragon from Mars, it would take a lot of evidence to convince me.
It feels like we are going around in circles at this point. I’m not sure where the disconnect is.
The set of all natural numbers is infinite, yet can be enumerated by a finite computer program (when run on an infinite computer, AKA, a Turing machine). There are many many other examples of infinite patterns enumerable by finite programs. And some of them, like “compute the digits of pi” seem pretty chaotic, yet their Kolmogorov complexity is small.
One wrinkle, which you might be alluding to, is that no program with infinite output ever halts. This is true, but there are halting programs that can compute any finite prefix of pi. And like I said before, at no point is your observation infinite. It’s always finite so far. The infinity is never completed.
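For instance, here is a sketch of such a halting program (Machin's formula; nothing deep, just an illustration that a few fixed lines generate as long a prefix as you ask for):

```python
from decimal import Decimal, getcontext

def pi_prefix(n):
    """A halting program that returns pi to n decimal places,
    via Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    getcontext().prec = n + 10  # working precision, with guard digits

    def arctan_inv(x):
        # arctan(1/x) = sum_{k>=0} (-1)^k / ((2k+1) * x^(2k+1))
        x = Decimal(x)
        total, k = Decimal(0), 0
        cutoff = Decimal(10) ** -(n + 5)
        while True:
            term = 1 / ((2 * k + 1) * x ** (2 * k + 1))
            if term < cutoff:
                return total
            total += -term if k % 2 else term
            k += 1

    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return str(pi)[: n + 2]  # "3." plus the first n digits

print(pi_prefix(30))  # 3.141592653589793238462643383279
```

The program's length stays fixed while n grows, which is the whole point about Kolmogorov complexity.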
So the hypothesis "these are the digits of pi" is considered by Solomonoff induction, but maybe it looks like a weighted sum of a class of programs that say "compute pi up to the nth digit" for some n. These still compress quite well (especially for compressible n's), so their Kolmogorov complexity is small. I don't think this is an obstacle for Solomonoff induction.
Has Solomonoff induction got it wrong? Close but not quite? I would argue no. I don’t believe uncomputable sets can physically exist. There are no perfect circles. The abstraction called pi is the approximation, for whatever algorithm physics is actually running, which Solomonoff induction would eventually find.
The posterior becomes the next prior when updating again, so we still call it a “prior” even though this is not the same prior as before. Sorry for the confusion. My current prior is my current level of belief/confidence.
Higher, yes, but (say) ten times almost nothing is still almost nothing. And that’s only if the likelihood ratio for the evidence favors the hypothesis by that much, which it doesn’t.
That’s right. No finite amount of evidence can overcome an infinitesimal prior.
Your example “miracles” are evidence in favor of miracles existing (because we can hardly expect reports of miracles to be less common if miracles exist) but the likelihood ratio is very close to 1 because false positives (accidents, hallucinations, and hoaxes) are so common. On priors, these explanations are far more likely. That means your “miracle” reports are extremely weak evidence.
I cannot lower my epistemic standards on this, or I would invite in flat-Earthers, UFO-ologists and various other conspiracy theorists, not to mention all the other religions who have similarly dubious paranormal claims. Why should I favor your paranormal claims over theirs? It’s special pleading.
″ but (say) ten times almost nothing is still almost nothing”
OK, cool. So if your prior were one millionth, I would need just six miracles :)
Strong enough evidence can overcome a very low prior, yes. And this doesn’t have to take very many observations.
But more instances do not necessarily stack like that. That can only happen to the degree they are independent sources. For example, suppose you write a dubious claim in a book, then you make nine more copies of the book. Does that make the claim ten times more likely to be true? What if it’s a hundred thousand copies? Did that help?
Of course it doesn’t! You’re re-counting the same evidence. The contribution of the nine books is completely screened off by the first; the new books have no new information.
I think the cases of miracle reports like weeping icons are similarly not independent enough. A thousand weeping icons is barely more evidence than one. It just means that the hoaxers copied each other’s scam.
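A rough numerical sketch of the difference (all numbers invented): independent reports multiply their likelihood ratios, while copies of the same report count only once.

```python
def posterior(prior, likelihood_ratios):
    """Bayes in odds form: posterior odds = prior odds * product of likelihood ratios."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior = 1e-6  # the "one millionth" from above
lr = 10.0     # invented likelihood ratio per independent miracle report

# Six genuinely independent reports, each worth a factor of 10:
print(posterior(prior, [lr] * 6))          # ~0.5

# Six copies of the SAME report: five are screened off, only one factor counts:
print(posterior(prior, [lr] + [1.0] * 5))  # ~1e-5
```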
Furthermore, we already know that some similar instances of miracles were hoaxes. Shouldn’t every new hoax report lower my prior that miracles are real?
OK. What does “omniscience” mean? The root words translate to something like “all knowing”. But what is “all”, and what is “knowing”? What’s the minimum qualification? Each successive option seems harder to prove:
Option A: (sufficiently advanced aliens) God’s knowledge isn’t infinite or anything, just far beyond our current level. “Omniscience” is more metaphorical than literal.
Option B: (semi-omniscient simulator) God can look up any past event in the world simulation, but isn’t simultaneously conscious of all of them and cannot predict the future short of actually simulating it. He does not know all the logical implications of His knowledge and can be surprised by events. (Janet from The Good Place might be at this level.) Although perhaps he can rewind the simulation and try a different timeline, if He makes any changes, He can’t always predict what would happen without actually trying it. He may also be ignorant of events in His native plane, outside of the world simulation.
Option C: (halting oracle of the first degree) God is a halting oracle machine able to solve the halting problem for any Turing machine, but is unable to solve the halting problem for halting oracle machines like Himself.
Option D: (higher-order halting oracle) God is a halting oracle machine able to solve the halting problem for any Turing machine, and halting oracle machines of some finite degree less than His own, but is unable to solve the halting problem for higher-order halting oracle machines like Himself, or those of any higher degree. There may possibly be beings of greater degree that know things God doesn’t.
Options A, and maybe B seem at least possible, but very very far from proven. Option C seems unprovable using any finite amount of evidence, but probably has a logically coherent definition. Option D seems unprovable even with infinite evidence, but again seems coherent.
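(Roughly speaking, Options C and D sit on the standard Turing-jump hierarchy,

$$\emptyset <_T \emptyset' <_T \emptyset'' <_T \cdots <_T \emptyset^{(n)} <_T \cdots,$$

where $\emptyset'$ is the ordinary halting problem, each level solves the halting problem for machines equipped with the previous level as an oracle, and no level can solve its own.)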
Or did you have some other option in mind? I don’t know how to get past Option D without self-referential paradoxes invalidating the whole definition, but perhaps you have some new math for me?
Yes, I have Option E: everything. God just knows everything, all the possible universes: not calculating them, just having them in His memory, which is infinite.
As I stated in the previous comment, there is no reason for the exact theory to be finite, while its approximations can be finite (would you like me to copy it here, or can you find it?).
That’s your crux? Lesser interpretations than E won’t do?
I am not convinced that E is logically coherent. It’s as meaningless as “married bachelor”.
Suppose that God’s memory is the set of “all facts” O.
The set of all subsets (or powerset) of O, we’ll call p(O).
Then, for any given fact f and each subset S of O in p(O), there is a further fact f′ stating whether f is or is not in S; distinct subsets give distinct facts.
Thus, there must be at least as many facts as there are elements of p(O), which, being the powerset of O, by Cantor’s Theorem must have a strictly greater cardinality than O.
But we assumed that O contains all facts. Contradiction!
And Cantor’s Theorem holds even for infinite sets! Q.E.D.
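In symbols, the same step (my compressed paraphrase): fixing any fact $f$, the map

$$S \;\longmapsto\; f_S := \text{the fact of whether } f \in S$$

sends distinct subsets to distinct facts, so $|p(O)| \le |O|$, while Cantor gives $|O| < |p(O)|$.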
Did I just disprove God?
Good evening. Sorry to bring up this old thread. Your discussion was very interesting. Specifically regarding this comment, one thing confuses me. Isn’t “the memory of an omniscient God” in this thought experiment the same as “the set of all existing objects in all existing worlds”? If your reasoning about the set paradox proves that “the memory of an omniscient God” cannot exist, doesn’t that prove that “an infinite universe” cannot exist either? Or is there a difference between the two? (Incidentally, I would like to point out that the universe and even the multiverse can be finite. Then an omniscient monotheistic God would not necessarily have infinite complexity. But for some reason many people forget this.)
Well, your argument should then be able to kill the concept of the Tegmark mathematical multiverse, so you can guess it is not a "silver bullet" :) Two possible answers:
1. You cannot just change the words "mathematical universe" to the word "fact" in my definition E. "A mathematical universe stating that..." makes no sense to me.
2. Cantor's theorem is based on a particular axiomatization of set theory. However, there are different set-theoretic axiomatizations, and some of them allow universal sets: https://en.wikipedia.org/wiki/Universal_set
OK, that’s a good point. I had not heard of the universal sets that contain themselves, which I thought would lead to contradictions.
I'm really not persuaded by the MUH, but at least it's based on reasoned a priori arguments. Do you have similar a priori arguments for God? There's no way for evidence to ever be enough to establish omniscience by itself.
″ OK, that's a good point. I had not heard of the universal sets that contain themselves, which I thought would lead to contradictions. ″
Great, the update of belief :)
Yeah, given New Foundations, I’m no longer confident that “omniscience” is a logical contradiction, but neither am I confident that it isn’t. And I still think it would take an infinite amount of evidence to prove inductively, so you would need some kind of a priori argument for it instead (or why believe it at all?). That’s one obstacle down, but still a long way to go.
It took a great deal of evidence to nail down both the existence of new particles and their properties to that degree of precision. It’s already strong enough to overcome a low prior, but due to mathematical symmetries in nature, some particles were even predicted in advance of experimental discovery. In other words, they had a high prior given what was known, which is why scientists were willing to go to the great expense of looking for them.
We do not have any strong evidence for God, and assuming omniscience alone gives Him an infinitesimal prior, which means no amount of evidence could ever be enough.
No. That is not fundamental at all. Bell’s Theorem only rules out local hidden variables. The Many-Worlds Interpretation and De Broglie–Bohm interpretation are deterministic.
Yes, it is for the observer. You cannot deduce Born's rule from $\hat{H}\Psi = i\hbar\,\partial\Psi/\partial t$. No interpretation of quantum mechanics can help you with that.
″ Bell's Theorem only rules out local hidden variables. ″ OK. Do you prefer a non-local theory, then?
Yes, MWI still has indexical uncertainty. This is a property of the observer, not the universe, which remains deterministic. But you can still simulate the wavefunction on a Turing machine and use it to make predictions, which was my point. It’s in the space of hypotheses of Solomonoff induction.
I don’t really prefer non-local theory, but the laws of nature are what they are and don’t care what I want.
Of course, the Universe as a whole is deterministic, since it obeys the Schrödinger equation. However, the only thing we have access to is observation, and observation is probabilistic. With a deterministic Turing machine you cannot predict the outcome of an observation, only the probabilities of the outcomes.
Well, the laws of nature are, of course, what they are. However, you can interpret them in different ways. You can say that there is fundamental probability, the wavefunction, and all that stuff, as most scientists do when they perform calculations. Or you can start introducing hidden non-local variables, which do not improve your predictions but just make the theory more complicated. There was an April 1st paper introducing particles as sentient beings communicating with each other superluminally to deceive experimentalists. It is your choice which representation you prefer, but I thought you wanted the simplest one.
I think you completely missed my point about the toast. I was trying to be humorous by referencing an actual case, but one that I found especially silly.
It’s just pareidolia. It’s the same as seeing animals in clouds. But which animal you see depends on which animals you’re familiar with. Toast patterns are noisy, so are clouds. The human perceptual system is constantly trying to recognize what it knows in what it sees, and seems particularly good at finding faces. And we have a pretty good idea how this works. See DeepDream.
Yes, I can see that the pattern resembles a human face, and a feminine one. But I personally think that the toast looks more like Abby Sciuto from NCIS than most Virgin Mary paintings.
Oh yeah, I have heard about this stuff too. No, I do not consider pareidolia a miracle. Basically, I listed above (replying to the question of what would disprove me) what I take to be miracles. In short: things that not just one old lady claims to be a miracle, and not just a few local priests and a bishop, but a special committee from the Church (after an investigation) and, as a result, the whole Church.
If you apply that consistently, you get instrumentalism. Most people here aren't instrumentalists, and do care about theories that don't constrain experience, such as [MWI], MUH and the simulation hypothesis. If you are going to reject metaphysics, you should reject all of it.
Sorry, what’s a MEU?
Typo for MWI
OK, so when things behave as normally expected, that’s just laws of nature, but whenever you’re surprised we can blame it on the witch?
This point is very important: The theory must make predictions to be knowledge—if your theory is equally good at explaining anything (like the witch), then you have zero knowledge, because it fails to constrain anticipation.
The sword of prediction cuts both ways:
In other words, how strong a piece of evidence should appear to you, depends on your priors; strength is not a property of the evidence alone. If you are claiming that your God hypothesis is not equally good at explaining anything (like the witch), and if you are (rationally) very confident that God exists, then you must have a weak expectation of seeing strong evidence the other way. That’s a crux, right? What would be a big surprise to your theory?
A corollary to conservation of evidence: Absence of evidence is evidence of absence, when the observation would be expected.
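In symbols, conservation of expected evidence is just

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E),$$

so if observing $E$ would raise $P(H)$, then failing to observe it when it was expected must lower $P(H)$.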
“Laws of nature do not hold 100%” is a prediction. That’s why atheists feel it necessary to argue against miracles.
Well, my expectation would decrease if some of the miracles I believe in were proven to be fakes or natural events. The miracles I believe in are not those that people believe locally, but those that the Church recognizes globally; usually a special commission is sent to check whether it is indeed a miracle or just a natural event (or a fake). I would say I put a high probability on the miracles approved by such a commission being indeed miracles, and if you demonstrate to me that they are not, it would decrease my probability. The miracles I can name:
-various myrrh-streaming icons, as long as they passed the check by church officials beyond the local level
-testimonies that are collected for the canonization of saints. Each time a new person is canonized, one of the main criteria is whether there are miracles attributed to prayers to him. So this is quite a large body of testimony from different witnesses. Most of it can be explained by coincidence or natural effects; however, there are more difficult cases, such as very fast cures from diseases that, by the doctors' prognosis, should have taken orders of magnitude longer (or should not have happened at all).
-relics of saints. In some cases (quite often, actually), when after a long time the body of a dead person who is considered a saint is exhumed, it is discovered to be undecomposed. It is not a necessary condition; there are many saints for whom this did not happen. However, it is an interesting question whether this effect is more frequent among saints than among ordinary people (taking into account only the cases when the relics were exhumed after canonization, to exclude bias). If it is indeed significantly more frequent, what could be the reason? And why would the situation be the opposite on Mount Athos, where non-decomposition of the body is considered a bad sign?
Is that an argument against the Mathematical Universe Hypothesis? Wouldn’t the ultimate ensemble have to include a halting oracle?
Well, the relationship between infinite extent and infinite complexity is tricky. Everyone in the rationalsphere knows that pi has an infinite decimal expansion, and also that the digits can be generated by a finite program. “Every mathematical entity exists, and only mathematical entities exist” is likewise a brief compression of the MUH.
You can’t prove a halting oracle exists inductively. How could you? Solomonoff induction is doing induction perfectly, and a halting oracle is not even in the hypothesis space, because the space contains only computable functions, and the halting problem is not decidable. And even if it were, what use would that hypothesis be to you? You can’t get any predictions from running a program on a halting oracle machine when you don’t even have one.
From the Wikipedia article:
So this is a point that Tegmark himself considers fair. The CUH would not have a halting oracle.
How is that relevant? It is perfectly possible for a mathematical universe to be a form of Platonic realism.
Which implies that the MUH might.
I find the MUH philosophically dubious. I also disagree with Wikipedia’s characterization of the CUH as adding an additional hypothesis on top of MUH (I’m not sure if that’s how Tegmark sees it, or if that was just an interpolation by the editor). Instead, the CUH is throwing out the dubious axiom that allows things like uncomputable sets to exist, which means by Occam’s razor, I think the CUH is the simpler hypothesis. I don’t exactly buy the CUH either, but I don’t have a better idea.
I disagree with your interpretation of “perfectly possible”, but even if I hypothetically grant you that a halting oracle exists, how can an agent ever be rationally justified in believing that it does? It’s something that takes an infinite amount of evidence to prove. The method clearly can’t be induction.
I think you are missing some things that are quite basic: essentially no one believes in things like the Mathematical Universe on the basis of empiricism or induction. Instead, Occam's razor is the major factor.
Note that "things like MUH" include MWI. It is straightforwardly impossible to prove MWI or any other interpretation on the basis of evidence, because they all make the same predictions. So the argument given for MWI is in terms of simplicity and consilience.
Not many people here reject all reasoning of that type. Many reject it selectively.
The simplicity criterion means MUH is preferable to CUH, since CUH has an additional constraint.
Thank you! I will read and try to understand it