Simply restricting yourself to the major religions is a difficult move.
I think it’s reasonable not to count any religions that don’t have a single head God. And it’s reasonable not to count religions that have lost—if God really was the god of the Aboriginal dream-time, It would have quit or restarted by now.
I’m also worried that every single part of the argument is completely and utterly isomorphic if one simply swaps Islam and Christianity and makes the appropriate mapping.
This isn’t a problem for the argument unless you want to promote Christianity over Islam. It comes up with about the same expected utility for Islam and Christianity. The point of interest to us on LW is that both will be higher than the expected utility for atheism.
But BQP lies in PP and therefore, in fact, lies in PSPACE.
You totally lost me.
So if quantum mechanics as we understand it is correct, simulating it should be really tough. This cannot rule out simulations from universes with other laws of physics, but I see no reasonable way to even start quantifying that probability.
Quantum mechanics can be construed as a way to limit computation and avoid arithmetic overflows for a simulation done in a Newtonian universe.
Also, given that none of these religions claims that the deity is something like a simulator, that should be a serious strike against the claims.
No one plays computer games in which their avatar claims to be the God running the simulation. That wouldn’t be fun—it would break God’s suspension of disbelief.
You’ll forgive me if these numbers look just slightly too convenient.
Quantum mechanics can be construed as a way to limit computation and avoid arithmetic overflows.
You’re missing JoshuaZ’s point. Quantum mechanics at first looks like it avoids arithmetic overflows by letting you get away with finite precision. But it doesn’t, really. There are still arbitrarily precise numbers that the universe “keeps track of”—but they’re amplitudes, not observables.
In quantum mechanics, amplitudes are complex numbers that seem to be treated as mathematically exact, out to indefinitely much precision. And the possibility of quantum computing suggests that this isn’t purely an artifact of our models -- you can potentially do useful computation using the low-order bits of the amplitudes.
Question for the physics types here: If amplitudes didn’t add precisely linearly, could we tell? Is there a sensitive test for linearity?
So even in quantum mechanics, it takes an infinite amount of information to represent a single particle? That’s a problem.
It’s a problem for us. But the universe doesn’t have to care.
The notion of “a single particle” turns out to be problematic. The thing that requires arbitrarily much information is the amplitude for a particle to be in some particular state.
I should emphasize that amplitudes aren’t just a creature of our models—they are the things that interfere to give you a diffraction pattern, or the shapes of an electron orbital cloud, or that get manipulated in a quantum computation.
I think it’s reasonable not to count any religions that don’t have a single head God.
That adds a lot more religions. And then one has to treat the different forms of the major religions separately. The extremist Catholic and the Jack Chick fanatic each think the other is going to hell. As to being restricted to religions where there is a single head God, have you heard of MMORPGs?
And it’s reasonable not to count religions that have lost—if God really was the god of the Aboriginal dream-time, It would have quit or restarted by now.
Or it is really annoyed at how it lost against the computer-played religions and is running the simulation for a few more seconds (from its perspective) to see if, on the off chance, its religion becomes more popular again.
(There’s a fundamental problem with any sort of Pascal’s Wager situation like this. One can keep constructing narratives like the above. And it isn’t at all clear how likely they are.)
But BQP lies in PP and therefore, in fact, lies in PSPACE.
You totally lost me.
BQP is the set of problems that can, in some sense, be quickly solved on a quantum computer. BQP lies in the class PP and therefore lies in PSPACE, the class of problems which can be solved on a classical (deterministic) computer with memory bounded by a polynomial in the size of the input. Now, if there were some m such that it is possible to simulate m+1 particles with m particles (possibly with some slowdown), then one could, by iterating this, simulate n particles for any n. So the only restriction on what you can simulate with your m particles is time-based, not space-based. In particular, you could use the particles to simulate things that require more space than PSPACE allows.
But I can simulate m particles with an appropriate number of qubits. So, putting it all together, I’d have to conclude that BQP doesn’t lie in PSPACE. That’s a contradiction.
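A compressed restatement of the argument (my paraphrase, not part of the original comment): the containments are standard results, and the implication is the iteration step just described.

```latex
% My paraphrase of the argument above, not part of the original comment.
\[
  \mathsf{BQP} \;\subseteq\; \mathsf{PP} \;\subseteq\; \mathsf{PSPACE}
\]
\[
  \bigl(\exists\, m:\ m\ \mbox{particles can simulate}\ m+1\bigr)
  \;\Longrightarrow\;
  \bigl(\forall\, n:\ m\ \mbox{particles can simulate}\ n\bigr)
\]
% Together these would let a fixed, polynomial-space-simulable system solve problems
% needing more than polynomial space, contradicting the containment; hence no such m
% exists, and simulating quantum systems really is hard.
```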
Quantum mechanics can be construed as a way to limit computation and avoid arithmetic overflows.
This seems strange to me. Quantum computations are, if anything, much tougher than standard, deterministic computations. It is very difficult to simulate on normal computers the behavior of even a handful of particles in a quantum mechanical setting. We also know that there are things that a quantum computer can do that an equivalent classical machine simply cannot do in reasonable time. The most prominent example is Grover’s algorithm. I don’t see how quantum mechanics limits computation or avoids arithmetic overflows. If anything, it strongly suggests that we’re not in a classical simulation.
No one plays computer games in which their avatar claims to be the God running the simulation. That wouldn’t be fun—it would break God’s suspension of disbelief.
This assumes an extremely narrow cognitive attitude. Note how many people, when playing games like WoW, are more than willing to give their characters names which strongly don’t support suspension of disbelief. (Tangent: I sort of see where you are going here. When my brother and I were much younger we played some games like Phantasie, and I’d actually go and try to give the characters fantasy-sounding names while he’d name them things like wiz1, wiz2, fig1, fig2, etc.)
You’ll forgive me if these numbers look just slightly too convenient.
I find them highly inconvenient.
What I mean is that you had, as one of your inputs, a number that by your own description was nearly completely arbitrary, and then ended up with a result that was a tiny bit over 1%, which was just the value you needed given your earlier assumptions. This looks a lot like there was an attempt, either conscious or unconscious, to pick values that made your argument go through.
We also know that there are things that a quantum computer can do that an equivalent classical machine simply cannot do in reasonable time.
I thought it was still an open question whether there are computations that a classical computer is necessarily slower at than a quantum computer. From Aaronson’s Philosophy/Comp-Complexity paper, p. 35:
More generally, that quantum computers can solve certain problems superpolynomially faster than classical computers is not a theorem, but a (profound, plausible) conjecture. [49] [50]
Footnote 49: A formal version of this conjecture is BPP =/= BQP, where BPP (Bounded-Error Probabilistic Polynomial-Time) and BQP (Bounded-Error Quantum Polynomial-Time) are the classes of problems efficiently solvable by classical randomized algorithms and quantum algorithms respectively. … any proof of the BPP =/= BQP conjecture would be considered almost as great a breakthrough as P =/= NP.
Footnote 50: Complicating matters, there are quantum algorithms that provably achieve exponential speedups over any classical algorithm: one example is Simon’s algorithm [119], an important predecessor of Shor’s algorithm. However, all such algorithms are formulated in the “black-box model” (see Beals et al. [23]), where the resource to be minimized is the number of queries that an algorithm makes to a hypothetical black box. Because it is relatively easy to analyze, the black-box model is a crucial source of insights about what might be true in the conventional Turing machine model. However, it is also known that the black-box model sometimes misleads us about the “real” situation. As a famous example, the complexity classes IP and PSPACE are equal [115], despite the existence of a black box that separates them (see Fortnow [56] for discussion).
Yeah, so this is a good point. The separation given by Grover’s algorithm is essentially a black-box result. And Shor’s algorithm being better depends on the claim that factoring is not in P, which is a claim strictly stronger than P != NP (and a claim which is doubted by a lot more people; Henry Cohn, for example, says there’s no good reason to think that factoring is not in P). So yes, this separation holds only in the black-box setting.
Wait, so there’s significant support for the belief that RSA can be broken in polynomial time?
There’s significant support not for that per se, but for the claim that there’s no good reason to think otherwise. I don’t know, for example, what estimate Henry Cohn would give for a polynomial-time factoring algorithm, but in conversations (and in at least one talk he gave) he’s pointed out that there’s no complexity-theoretic reason to suspect that factoring is tough. Moreover, there’s a polynomial-time factoring algorithm for Q[x], where Q is the rationals (if you don’t care about units). The argument for factoring being hard boils down solely to “lots of people have worked on this and haven’t succeeded.” But that’s a very weak argument, especially when one looks at how much progress has been made in factoring in the last few years (see especially the number field sieve and the elliptic curve method). This is in contrast to, say, the conjectures that P != NP and BPP = P, where in order for them to be wrong various things need to happen that seem unlikely.
But normally, problems eventually found to be in P are first found to be in BPP (primality testing being the most prominent example). Shouldn’t there at least be some probabilistic factoring algorithms (whatever that would mean)?
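For context on the primality example (my addition, not from the thread): randomized tests such as Miller–Rabin put primality testing in BPP decades before the deterministic AKS algorithm put it in P. A minimal textbook-style sketch in Python:

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin: randomized primality test with error probability <= 4**(-rounds)."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    # Write n - 1 = d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True  # n is prime with high probability

print(is_probable_prime(2**61 - 1))  # -> True (a Mersenne prime)
```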
Also, in the likely event that P != NP, that would imply the existence of NP-intermediate problems, and factoring is a good candidate for being in such a class.
(Personally, my hunch is that factoring is indeed in P, but I’m far from qualified to have a lot of confidence in that.)
But normally, problems eventually found to be in P are first found to be in BPP (primality testing being the most prominent example). Shouldn’t there at least be some probabilistic factoring algorithms (whatever that would mean)?
This is an example where moving between a decision problem (returning a yes or no answer) and a function problem (returning a number or list of numbers) may break down. To make factoring equivalent to a decision problem, one instead asks the question “Given n, and given a and b, is there a prime factor of n which is between a and b?” If one has a factorization of n, one can answer this question quickly. If one has an oracle that can answer this question, then one can factor in polynomial time by doing a binary search over intervals below n. This allows one to iterate calls to the oracle to find a prime factor in about log_2 n calls. Each time you find a prime factor p, you store it and then do the same process for n/p. This takes at most log_2 n calls times the number of prime factors of n. In general, if calls to your oracle take time f(n), the whole thing takes at most about f(n) (log_2 n)^2 time. (You can actually do slightly better than that, but this is good enough for pretty much all purposes.)
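To make that reduction concrete, here is a minimal Python sketch (my illustration, not from the thread). The trial-division has_prime_factor_in oracle is just a toy stand-in so the example runs; in the argument it would be the hypothetical polynomial-time decision procedure.

```python
# Sketch of the reduction described above: given a decision oracle for
# "does n have a prime factor in [a, b]?", recover the factorization of n
# with roughly (log2 n)^2 oracle calls.

def has_prime_factor_in(n, a, b):
    """Toy stand-in oracle (trial division): does n have a prime factor p with a <= p <= b?"""
    for p in range(max(a, 2), min(b, n) + 1):
        if n % p == 0 and all(p % q for q in range(2, int(p ** 0.5) + 1)):
            return True
    return False

def smallest_prime_factor(n, oracle):
    """Binary-search the interval [2, n], using about log2(n) oracle calls."""
    lo, hi = 2, n
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(n, lo, mid):
            hi = mid          # the smallest prime factor lies in [lo, mid]
        else:
            lo = mid + 1      # the smallest prime factor lies in [mid + 1, hi]
    return lo

def factor(n, oracle=has_prime_factor_in):
    """Repeatedly peel off the smallest prime factor, as described above."""
    factors = []
    while n > 1:
        p = smallest_prime_factor(n, oracle)
        while n % p == 0:
            factors.append(p)
            n //= p
    return factors

print(factor(2 * 2 * 3 * 5 * 7))  # -> [2, 2, 3, 5, 7]
```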
What I’m not sure about is whether the equivalent problem in BPP goes through. Say one has an oracle that answers the interval question with a BPP-like answer. Is there some way to turn that into a process that returns the correct factorization quickly with high probability? That’s not obvious to me. I haven’t really thought about the question that hard.
Also, in the likely event that P != NP, that would imply the existence of NP-intermediate problems, and factoring is a good candidate for being in such a class.
Right. That and graph isomorphism seem to be the two most common natural problems that are potentially in that intermediate class.
Personally, my hunch is that factoring is indeed in P, but I’m far from qualified to have a lot of confidence in that.
I think you’d be in an extreme minority here. Even people like Henry who think that the evidence for factoring not being in P is small don’t seem to think it is actually in P. If I had to make a guess on the chance that factoring is in P, I’d probably put it at under 20%.
Right. That and graph isomorphism seem to be the two most common natural problems that are potentially in that intermediate class.
Yes, but I thought there’s much higher confidence that graph isomorphism will be de-randomized, given how hard it is to even find problem instances that are unresolved after a few simple checks (e.g. of invariants of the adjacency matrices).
Yes, graph isomorphism looks a lot more likely to drop straight into P.
Not only is it hard to find difficult problem instances, for almost any nice type of graph (e.g. planar graphs, or interval graphs) there’s a polynomial time algorithm. On the other hand, there are about as many specific types of graphs where one can show that being able to solve graph isomorphism for that type of graph means you can solve it for all graphs. Connected graphs are a trivial example, but regular graphs are a less trivial example. But the closest rough equivalents for factorization look tough. Primality testing is easy, and testing if n=p^k for some prime p is about as easy. But in those cases, if the answer is “no” then you can’t factor n easily! And for almost any other natural set of integers, factoring is really tough. For example, consider Fermat and Mersenne numbers. In both cases, while factorization does seem to be easier, it isn’t that much easier. One can object that these are defined in part in an additive fashion, but this doesn’t help, since the most obvious multiplicative set of nice numbers, products of two primes, seems to be the toughest to factor. And that case has had the most work on it because of the applications to cryptography, so this isn’t due to drunks searching under the lamppost.
On the other hand, in some respects factorization looks easier. Factorization (phrased as the decision problem given earlier) lies in both NP and co-NP. In contrast, it is not known whether GI is in co-NP. Moreover, while factoring is in BQP, this has not been shown to be true for GI. (I’m not sure how much of a big deal that is, and I really don’t know the quantum end well enough to evaluate how strong that is as evidence, although my impression is that a lot of people have tried to show that GI is in BQP.) So in some senses, factorization fits more of the conditions which suggest it is actually in P.
Agreeing with this. There are quite a few religions that most of us would call Christian that all think the other Christians … aren’t.
I was raised as a Jehovah’s Witness. They literally think that other Christian religions, and specifically the Catholic Church, are the work of Satan.
More or less so than the Mormon Church?
More so.
I guess part of this is probably related to the Catholic Church and Protestantism. JWs believe themselves beneficiaries of the Protestant movement (though their beliefs stray from what Wikipedia says is one of the main tenets of Protestantism... namely, they don’t believe “faith in Jesus” is all that is needed for salvation).
Additionally, the Catholic Church was behind a large part of the persecution JWs suffered in the United States during the first half of the 20th century...
With regard to the Mormon Church, a brief search of the index I have of JW publications shows mostly articles that take an appreciative tone toward much of Mormonism, along with “if only they didn’t misinterpret the Bible and believe that pesky Book of Mormon was also the word of God.” I speculate that this is because the way the Mormon Church functions is very similar to the way the Jehovah’s Witnesses organization functions.