Edward Nelson claims proof of inconsistency in Peano Arithmetic
We’ve discussed Edward Nelson’s beliefs and work before. Now he claims to have a proof of a contradiction in Peano Arithmetic which, if correct, would not be specific to PA but would carry over to much weaker systems. I’m skeptical of the proof but haven’t had the time to look at it in detail. His approach is to construct a system Q_0^*, which looks almost but not quite like a fragment of PA, and then show that PA both proves this system’s consistency and proves its inconsistency. There seem to be two possible weak points.
First, he may be misapplying the Hilbert-Ackermann theorem; when it applies is highly technical and can be subtle. I don’t know enough to comment on that in detail. The second issue is that, in trying to use finitary methods to show there’s a contradiction in Q_0^*, he may have proven something closer to Q_0^* being omega-inconsistent. Right now, I’m extremely skeptical of this result.
If anyone is going to find an actual contradiction in PA or ZFC, it would probably be Nelson. There is some clearly interesting material here, such as using a formalization of the surprise examination/unexpected hanging paradox to get a new proof of Gödel’s Second Incompleteness Theorem. The exact conditions under which this version of Gödel’s theorem applies may be different from the conditions under which the standard theorem can be proven.
Nelson has withdrawn his claim. Link.
Is that a withdrawal of the entire approach or just one part of it?
On the FOM list, he writes:
Thanks, that’s very useful for http://predictionbook.com/predictions/3508
I am interested in how this event has influenced people’s views of the consistency of arithmetic. For those who had a viewpoint on it prior to this, what would you have estimated before and after, and with what confidence? I will not state my own view for fear of damaging the results.
It seems important in this context to distinguish between the consistency of different systems.
Overall, my consistency estimate for PA has, if anything, gone up in this context: if Edward Nelson tries really hard and fails to find a contradiction, that’s more evidence that there isn’t one. How much should it go up by? I’m not sure.
On the other hand, this may very well make my estimate for the consistency of ZFC go down. That’s because although in this specific case Nelson’s approach didn’t work, it opens up new avenues for demonstrating inconsistency, and part of me could see this sort of thing being more successfully applied to ZFC (although that also seems unlikely). (Also note that there have been attempts by a much larger set of mathematicians in the last few years to find an inconsistency in ZF, and that has essentially failed.)
One thing this has also done is made me very aware, in a visceral fashion, that number theory and logic are not the same thing, even when one is talking about subsystems of PA. I already knew that, but maybe had not fully emotionally processed it, as became clear when I saw Nelson’s outline and then the subsequent discussions and had a lot of trouble following the details. The main impact of this is to suggest that when making logic-related claims (especially about consistency and independence/undecidability issues) I should probably not rate my own expertise as highly as I do. As a working mathematician, I almost certainly have more relevant expertise than a random individual, but I’ve probably been overestimating how much my expertise matters. In both the case of ZF/ZFC and the case of PA, reducing my confidence means increasing the chance that they are inconsistent. But I’m not at all sure by how much this should matter. So maybe this should leave everything alone?
So overall I’d say around .995 consistency for PA and .99 consistency for ZF. Not much change from the old values.
More here, including a comment by Terence Tao.
Specifically, Tao’s comment:
He’s such a glorious mathematician. <3
He gave a more detailed comment on the n-Category Café.
This confuses me. Second incompleteness applies to extensions of PA. I’ll bet Terry Tao knows this too. So what does this remark mean?
FTL neutrinos and now a proof of inconsistency in Peano Arithmetic? What next?
I devised a little proof of the inconsistency of Newtonian mechanics years ago.
http://critticall.com/alog/Antinomy_inside_mechanics.pdf
Can you spot the error?
Vg vf pbeerpg gung ng g=0, rnpu cnegvpyr unf n yrsgjneqf irybpvgl. Ohg vg vf abg pbeerpg gung gur prager bs tenivgl ng g=0 unf n yrsgjneqf irybpvgl. Guvf vf orpnhfr gur flfgrz orunirf qvfpbagvahbhfyl ng g=0, naq fb gur irybpvgl bs gur prager bs tenivgl pnaabg or qrgrezvarq ol gnxvat n jrvtugrq nirentr bs gur vavgvny irybpvgvrf bs gur pbzcbaragf.
Va zber qrgnvy:
Yrg P(g) or gur cbfvgvba bs gur prager bs tenivgl ng gvzr g.
Gur irybpvgl bs gur prager bs tenivgl ng g=0 vf gur yvzvg nf g → 0 bs (P(g)-P(0))/g.
Ng nal gvzr g, P(g) vf gur fhz bire a bs Z(a)C(a,g), jurer Z(a) vf gur znff bs gur agu cbvag naq C(a,g) vgf cbfvgvba.
Gur nethzrag eryvrf ba pbzchgvat qP/qg nf gur fhz bire a bs Z(a)qC(a,g)/qg. Guvf fjncf gur beqre bs gjb yvzvgvat cebprffrf, fhzzngvba naq qvssreragvngvba, juvpu vf inyvq sbe jryy-orunirq fhzf, ohg abg sbe guvf bar.
For intuition, imagine that there were just the first three particles in the sequence, and that the number 10 is replaced by something much larger. The two leftmost particles will rapidly oscillate about one another, while drifting slowly towards the rightmost. Now imagine the first four particles are present. The two leftmost will oscillate madly about each other, while that system as a whole oscillates rapidly about the third from the left (I’m conveniently ignoring what happens at the instants when particles meet and pass through each other), and the ensemble of the three leftmost drifts towards the rightmost. Repeat.
It’s acceleration that matters, not velocity (the initial velocity of all points is zero, or at least that’s how I thought of it). However, your argument does generalize nicely to acceleration, and could possibly be the correct resolution.
Yes, stupid mistake of mine. Replace velocity by acceleration and position by velocity and the same argument works. The fallacy is in the vagrepunatr bs yvzvgf.
Similarly, for those who enjoyed discussing this problem: did you know that Newtonian mechanics is indeterministic?
These equations don’t make sense dimensionally. Are there supposed to be constants of proportionality that aren’t being mentioned? Are they using the convention c=1? Well, I doubt it’s relevant (scaling things shouldn’t change the result), but...
Edit: Also, perhaps I just don’t know enough differential equations, but it’s not obvious to me that a curve such as he describes exists. I expect it does; it’s easy enough to write down a differential equation for the height, which will give you a curve that makes sense when r>0, but it’s not obvious to me that everything still works when we allow x=0.
That is my guess. The simplest way, IMO, would be to replace the g in eq. 1 by a constant c with units of distance^(-1/2). The differential equation becomes r″ = g c r^(1/2), which works dimensionally. The nontrivial solution (eq. 4) is correct with an added (c g)^2 in front.
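If it helps, here is a quick symbolic check of that claim. It assumes (I haven't re-derived it from the linked page) that eq. 4 is the usual nontrivial solution r(t) = t^4/144, so the dimensionally corrected version is r(t) = (c g)^2 t^4 / 144:

    import sympy as sp

    t, g, c = sp.symbols('t g c', positive=True)

    # Assumed corrected form of eq. 4: r(t) = (c*g)^2 * t^4 / 144
    r = (c * g)**2 * t**4 / 144

    # Corrected equation of motion proposed above: r'' = g * c * sqrt(r)
    lhs = sp.diff(r, t, 2)
    rhs = g * c * sp.sqrt(r)

    print(sp.simplify(lhs - rhs))  # prints 0, so the claimed solution checks out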
I’m not sure what you mean here. What could be wrong in principle with a curve h = c r^3/2 describing the shape of a dome, even at r = 0?
Notice “r” here does not mean the usual radial distance but rather the radial distance along the curve itself. I don’t see any obvious barrier to such a thing and I expect it exists, but that it actually does isn’t obvious to me; the resulting differential equation seems to have a problem at h=0, and not knowing much about differential equations I have no idea if it’s removable or not (since getting an explicit solution is not so easy).
You piqued my curiosity, so I sat to play a little with the equation. If R is the usual radial coordinate, I got:
dh/dR = c y^(1/3) / [(1 - c^2 y^(2/3))^(1/2)]
with y = 3h/(2c), using the definition of c in my previous comment. (I got this by switching from dh/dr to dh/dR using the relation between sin and tan, and replacing r by r(h). Feel free to check my math, I might have made a mistake.) This was easily integrated by Mathematica, giving a result that is too long to write here, but it has no particular problem at h = 0, other than dR/dh being infinite there. That is expected; it just means dh/dR = 0, so the peak of the dome is a smooth maximum.
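For what it's worth, here is a small sympy sketch that re-derives that slope formula from the assumed dome profile h = (2c/3) r^(3/2) (with r the arc length from the apex; the 2/3 coefficient is my assumption, chosen to match the formula above), and confirms the smooth maximum at the apex:

    import sympy as sp

    r, c = sp.symbols('r c', positive=True)

    # Assumed dome profile: drop below the apex h = (2*c/3) * r^(3/2),
    # where r is arc length measured along the surface from the apex.
    h = sp.Rational(2, 3) * c * r**sp.Rational(3, 2)

    dh_dr = sp.diff(h, r)               # sin(theta): slope along the surface
    dR_dr = sp.sqrt(1 - dh_dr**2)       # horizontal component of arc length
    dh_dR = dh_dr / dR_dr               # slope with respect to the radial coordinate R

    # Claimed closed form, with y = 3h/(2c)
    y = 3 * h / (2 * c)
    claimed = c * y**sp.Rational(1, 3) / sp.sqrt(1 - c**2 * y**sp.Rational(2, 3))

    print(sp.simplify(dh_dR - claimed))  # prints 0: the formula above is reproduced
    print(sp.limit(dh_dR, r, 0))         # prints 0: the apex is a smooth maximum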
Vf vg gung gur vavgvny irybpvgl bs gur znggre vf mreb, gur vavgvny naq riraghny nppryrengvba bs gur pragre bs tenivgl vf mreb, ohg gur pragre bs tenivgl vgfrys ortvaf jvgu n yrsgjneq irybpvgl?
This is a good one, but I won’t try doing the math to figure out the specific error with it (other than the obvious problem of (rot13) gur arrq gb pnyphyngr gur rssrpg bs nyy gur znffrf, abg whfg gur vzzrqvngryl ba gur evtug naq gur vzzrqvngryl ba gur yrsg.)
Here’s what my intuition is telling me: Gung jung jvyy npghnyyl unccra vf gung gur znffrf jvyy pbairetr fbzrjurer va gur pragre, jvgu fbzr bs gur znffrf nsgre gur zvqqyr cbvag zbivat evtugjneqf vafgrnq. V vzntvar gung V’q unir gb hfr vagrtengvba bire gur ahzore bs vasvavgr znffrf ba gur yrsg, gb frr gung gur zngu sbepr gur jubyr guvat gb onynape.
An elegant puzzle.
Gur fbyhgvba vf rnfvrfg gb frr vs jr pbafvqre n svavgr ahzore bs vgrengvbaf bs gur cebprqher bs fcyvggvat bss n fznyyre znff. Va rnpu vgrengvba, gur yrsgzbfg znff onynaprf gur obbxf ol haqretbvat irel encvq nppryrengvba gb gur evtug. Va gur yvzvg nf gur ahzore bs vgrengvbaf tbrf gb vasvavgl, jr unir na vasvavgrfvzny znff onynapvat gur obbxf jvgu vasvavgryl ynetr nppryrengvba.
I think the solution I came up with is, in spirit, the same as this one.
Pbafvqre gur nzbhag bs nppryrengvba rnpu cnegvpyr haqretbrf. Gur snegure lbh tb gb gur yrsg, gur terngre gur nppryrengvbaf naq gur fznyyre gur qvfgnaprf, naq, guhf, gur yrff gvzr vg gnxrf orsber n pbyyvfvba unccraf. (V pbhyq or zvfgnxra gurer.) Sbe rirel cbfvgvir nzbhag bs gvzr, pbyyvfvbaf unccra orsber gung nzbhag bs gvzr. Gurersber, gur orunivbe bs gur flfgrz nsgre nal nzbhag bs gvzr vf haqrsvarq.
Jung vs jr fhccbfr gung gur obqvrf pna cnff guebhtu rnpu bgure? Ubj qbrf gur flfgrz orunir nf gvzr cebterffrf?
Vg’f rnfl rabhtu gb rkgraq arjgbavna zrpunavpf gb unaqyr gjb-cnegvpyr pbyyvfvbaf. Whfg cvpx bar bs “rynfgvp pbyyvfvba” be “cnff guebhtu rnpu bgure”, naq nccyl gur pbafreingvba ynjf naq flzzrgevrf. Ohg V’z abg njner bs nal cebcbfnyf gb unaqyr guerr-be-zber-cnegvpyr pbyyvfvbaf: va gung pnfr, pbafreingvba bs raretl naq zbzraghz vfa’g rabhtu gb qrgrezvar gur bhgchg fgngr, lbh’q arrq fbzr npghny qlanzvpf gung ner qrsvarq ba gur vafgnag bs pbyyvfvba.
Do you have a reference for how to extend Newtonian mechanics to collisions or passing-through of point particles subject to gravity? How about a reference to people complaining about 3-body collisions?
Bringing in energy and momentum doesn’t sound helpful to me because they are both infinite at the point of collision.
My understanding is that each of the two choices has a unique analytic extension to the complex plane away from the point of collision. Most limiting approaches will agree with this one. In particular, Richard Kennaway’s approach (to non-collision) is to perturb the particles into another dimension, so that the particles don’t collide. The limit of small perturbation is the passing-through model.
For elastic collisions, I would take the limit of small-radius collisions. I think this is fine for two-body collisions. In dimension one, I can see a couple of ways to do three-body collisions, including this one. The other, once you can do two-body collisions, is to perturb one of the bodies to get a bunch of two-body collisions; in the limit of small perturbation, you get a three-body collision. But if you extend this to higher dimensions, it results in the third body passing through the collision (which is a bad sign for my claim that most limiting approaches agree). When I started writing, I thought the small-radius approach had the same problem, but I’m not sure anymore.
Yes, I think so too.
Gur prager bs znff vf ng gur bevtva naq fgnlf gurer?
Clever! It took me a couple of minutes to get it: Gur yrsgzbfg cbvag (gur 1 xt znff) jvyy zbir gb gur evtug, xrrcvat gur pragre bs tenivgl jurer vg vf. Fb vg’f abg gehr gung “jr pna cebir guvf sbe rirel cbvag.”
You are misreading which one’s the rightmost mass. The rightmost mass is the one with 1kg.
Whoops, you’re right. I assumed +1/10 meant +1/10th from +1. I had the whole thing backwards.
Gurer vf ab reebe. Gur cevapvcyr gung gur pragre bs znff pnaabg nppryrengr pbzrf sebz gur snpg gung, sbe nal cbvag znff, gur sbepr vg rkregf ba bgure znffrf vf rknpgyl bccbfvgr gb gur sbepr bgure znffrf rkreg ba vg. Guvf qbrf abg nccyl gb vasvavgr pbyyrpgvbaf bs znffrf sbe gur fbzr ernfba gung na vasvavgr frevrf bs barf qbrf abg nqq gb mreb, rira gubhtu vg vf gur qvssrerapr orgjrra gjb vqragvpny frevrf, rnpu gur fhz bs nyy cbfvgvir vagrtref.
Vs lbh guvax gurer vf n fcrpvsvp qviretrag frevrf, anzr vg.
Gur fhz bs gur gbgny sbeprf ba nyy cnegvpyrf.
vf yrff guna 0 sbe rirel grez va gur bhgre fhz, rira gubhtu lbh pna pnapry bhg nyy gur grezf ol ernpuvat vagb gur vaare fhzf.
To put code blocks in Markdown, put four spaces before the lines you want to be code. I assume that is what you intended?
Edited, although I’m not sure it’s all that much more readable now.
I was thinking of including a line break or two. ;)
Good idea.
bx, gung vf n ceboyrzngvp erneenatrzrag. Ohg juvpu fvqr vf pbeerpg? Lbh znqr n pynvz nobhg guvf va lbhe bevtvany pbzzrag, ohg V guvax lbh tbg vg onpxjneqf.
My previous attempt at a reply didn’t make much sense. And this is deep enough into the thread that I won’t bother with rot13.
Let’s try again: The description given by Thomas accurately models how each individual particle accelerates (towards the origin). It thus also accurately models how the center of mass accelerates in the non-limiting case.
I think Richard Kennaway’s answer is correct and it seems to me to be opposite to your original answer. The non-limiting case is not interesting. The question is whether there is a coherent extension of Newtonian mechanics to infinitely many particles. It does cover this case and the center of mass still works.
I think this paradox shows that the answer is no.
I can’t figure out what this sentence was intended to mean.
Richard Kennaway’s answer
V guvax gung va gur aba-yvzvgvat pnfr, gur neenatrzrag fubjvat rirelguvat qevsgvat gbjneqf gur bevtva vf pbeerpg naq gur neenatrzrag fubjvat gur pragre bs znff fgnlvat fgvyy vf va reebe. Bs pbhefr, va gur yvzvg nf gur ahzore bs cbvag znffrf vapyhqrq tbrf gb vasvavgl, vg’f gur bgure jnl nebhaq.
Well, for starters we live in an integer universe, so there is a leftmost point.
Even beyond that, though, the third law suggests that for every force pulling mass leftward, you have a force pulling mass rightward, and so the center of mass should not move as the collection collapses.
[edit]In case basic physics is not obvious to you, an explanation of what the second line means can be found here.
[EDIT] What I am trying to say is that the problem is that an infinity is not taken as a limit, and thus the approach to the problem is confused.
I am curious why this received downvotes. Is it because people expected me to rot13 my answer?
I didn’t downvote, but if I had it would have been because you seem to be missing the point a bit.
It may well be the case that we live in a discrete universe without infinities (I assume that’s what you meant by integer), but this is not the case for Newtonian mechanics, which asserts that both space and time remain continuous at every scale. Thomas never claimed to have a paradox in actual physics, only in Newtonian mechanics, so what you are saying is irrelevant to his claim; you might as well bring up GR or QM as a solution.
Appealing to the third law also doesn’t help, his whole point is that if you do the calculations one way you get one answer and if you do them another way you get another answer, hence the use of the word paradox.
His point is if you only do some of the calculations, you can come up with the wrong answer. I fail to see how that implies a paradox rather than just sloppiness.
If I can do a calculation and stop, and then do another calculation and get a contradiction, that’s still a contradiction. It doesn’t matter that I can do other calculations that would lead to a non-contradiction.
If he put a single equal sign in his ‘proof’, I would be more charitable. As it is, it’s not clear to me that he actually did any calculations or showed any contradictions.
I’m not sure he did, but I have done the calculations and it seems to check out (although I may have made a mistake). The only laws I used were F = ma and F = Gm_1m_2/r^2, from which the third law should emerge as an immediate consequence rather than needing to be added on top.
The reason they “check out” is because you calculate the force caused by N+1 particles on N particles. Because your calculation has an external particle, the CoM has an acceleration. This is entirely an artifact of how the limit is taken, and is thus a sign of sloppiness and incompleteness.
If you did the calculations for the system of N particles, then took the limit as N approached infinity, you would get no CoM acceleration. This really has nothing to do with Newtonian physics.
I don’t think I do this.
Obviously the problem is with an infinity not taken as a limit. If you had said that, instead of saying other irrelevant things, then I doubt anyone would have objected.
Does your leftmost particle have a rightward acceleration which makes the weighted average of acceleration (i.e. CoM acceleration) 0?
I have edited the ancestral post to say that. Hopefully, the superior articulation will cause its karma to rise into the positives.
Since I wasn’t using a limit I didn’t have a leftmost particle.
Then you are calculating the force caused by N+1 particles on N particles. For every particle i, you look at the i-1 particles to the right, add up their gravitational force, and see that it is dwarfed by the force from the particle to the left, particle number i+1.
If you have a finite number of particles, the mistake vanishes. If you have an infinite number of particles but you add particles all at once instead of half of i and half of i+1 at once, the mistake vanishes.
I calculated using all the particles to the left rather than just one, and so every pair got taken into account once for each member of that pair.
Then you were calculating the gravitational effect of an infinite number of external particles on a finite number of particles, which makes things worse.
[Edit]: Note that each pair has to be taken into account twice. The N+1 vs. N error results from having an extra force floating around; the N+Infinity vs. N error results from having an infinite number of extra forces floating around. If you add forces as linked pairs, the error disappears.
Once for each member of the pair = twice
No, I calculated it for every particle, and therefore I calculated it on an infinite number of particles. Obviously my calculations only considered one particle at any time, but the same could be said of calculations in problems involving finitely many particles.
Suppose you have a system of N+1 particles.
You calculate the gravitational force on the first particle. There are N forces on it; you now have N unpaired forces.
Now you calculate the gravitational force on the second particle. There are N forces on it; you now have 2N-2 unpaired forces (you added N more forces, but one of the new ones and one of the old ones are paired).
Then you calculate the gravitational force on the third particle. There are N forces on it; you now have 3N-6 unpaired forces (you added N more forces, and 3 choose 2 are paired).
The number of unpaired forces will grow until it peaks and then shrinks down to zero (as the number of forces that pair up will eventually overcome the added forces).
What happens if you take this approach with an infinite number of particles? Well, at each moment you add an infinite number of new forces, and only a finite number pair up. Thus, as soon as you start the calculations you have an infinite number of unpaired forces and that number only grows.
The right way to do this is the buddy system: never let a force enter the calculations without its reverse force. To do this, you add particles to your consideration one by one. The number of considered forces is N choose 2, and you never have unpaired forces. You let the number of considered particles grow to infinity, and it still works.
It certainly doesn’t grow, it becomes infinity and stays there.
Okay, this is getting into set theory, but it’s actually the case that adding infinitely many things and then taking finitely many away can eventually leave you with zero.
Suppose at step n you add the following numbers: 2n-1, 2(2n-1), 4(2n-1), ..., (2^k)(2n-1) for all k. Then you take away the number n.
This procedure adds infinitely many numbers at every step and takes only one away, so by your argument it should only ever grow and end up as infinity. However, it’s pretty clear that every number gets added once and then taken away once, so we end up with nothing.
I think it’s pretty clear that a similar thing is going on here.
Again, I agree that the limits are being done wrong; I just disagree with your description of the N vs. N+1 thing.
It’s not clear to me what you mean by “actually the case.” I agree that it could be the convention among set theorists that things work that way, and that there may even be benefits from taking that approach. In my experience, it only serves to foster confusions about infinity.
Consider a simple version of your scenario: start off with a ball labeled 1 in an urn of infinite capacity. Add two balls labeled with the two integers immediately above the largest label in the urn, then remove the ball with the smallest label from the urn. Repeat those two steps an infinite number of times. It is clear that every time you perform the two steps, the number of balls in the urn increases by one, but also that for any individual ball, at some point it will get removed from the urn. I disagree with the conclusion that the set of balls in the urn is empty after an infinite number of steps; I would rather call it ill-defined, as it should have both no members and an infinite size. (If you don’t paint numbers on the balls, and just add two balls then remove one, it’s obvious the urn contains an infinite number of balls after an infinite number of steps. Why would painting numbers on balls make them disappear from the urn? Because you run out of numbers?)
How would you describe it?
I would call it the mistake of taking a theorem about Newtonian physics that was proven by assuming finitely many objects, and generalising it to infinitely many (which would be allowed by limits, but this isn’t using limits).
I would say that if you don’t paint numbers on the balls you could end up with 0, or infinity, or something else, depending on how you do it. If you do it randomly you do have a probability 1 of ending up with infinity, but if you’re careful you can make it less.
I’m confused. Are you saying that the CoM would accelerate in this situation, and thus the proof that CoMs only accelerate under external forces does not apply to this situation, or are you saying that this counterexample is misconstructed, and the CoM would not accelerate?
I thought I was saying the former, although I’m no longer actually sure.
Perhaps someone should start a discussion thread in which any mention of infinity or its synonyms is forbidden.
Why would you want to?
Minor note: this is the Ross-Littlewood paradox.
Technically speaking it’s a slight modification, since usually the paradox involves adding a finite number and then taking away 1, rather than adding infinitely many.
I didn’t downvote you; however, both your objections were rather beside the point. One doesn’t deal with the issue of Newtonian mechanics, and whether it was indeed so obviously flawed from the beginning; it feels a bit like explaining the paradox of Achilles and the tortoise by saying we’re living in an integer universe. That may be true, but it feels like a rather weak, dodging-the-issue explanation.
As for the second answer, it doesn’t really state where the calculations in the presentation of the paradox are wrong or are missing something. It just says that they are, so it’s a non-solution.
Maybe this is because I was a physics student, but to me the missing pieces implied by the second answer are so obvious as to make it not worth my time to type them and yours to read them. Apparently I was mistaken, so here they are.
By the principle of superposition, we can break down an N-body problem into (N choose 2) two-body problems. Consider the mass M and the mass m, a distance R away from each other (with m lighter and at the leftward position x). Their CoM is at x + RM/(M+m). The force F_Mm on m, leading to the acceleration a_m, is GMm/R^2, giving a_m = GM/R^2. The force F_mM on M, leading to the acceleration a_M, is -GMm/R^2, giving a_M = -Gm/R^2 (these are negative because M is being pulled leftward). To determine the acceleration of the center of mass, we calculate m*a_m + M*a_M = GMm/R^2 - GMm/R^2 = 0. The CoM of that pair will not move due to forces exerted by that pair. This is independent of M, m, and R. When we add a third mass, that consists of adding two new pair systems, each of which has a CoM acceleration of 0. This can be continued up to arbitrarily high N.
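To make the same point concretely, here is a minimal Python sketch (with made-up masses and positions, not Thomas's particular configuration): for any finite collection of point masses on a line, the mass-weighted sum of the gravitational accelerations, i.e. the CoM acceleration, comes out to zero up to floating-point error.

    import itertools
    import random

    G = 6.674e-11  # gravitational constant; its value is irrelevant to the cancellation

    def com_acceleration(masses, positions):
        """CoM acceleration of a finite 1-D system of point masses under mutual gravity."""
        n = len(masses)
        accels = [0.0] * n
        # Superposition: treat every unordered pair once, adding equal and opposite
        # forces, so Newton's third law is built in automatically.
        for i, j in itertools.combinations(range(n), 2):
            r = positions[j] - positions[i]
            f = G * masses[i] * masses[j] / r**2 * (1 if r > 0 else -1)
            accels[i] += f / masses[i]   # pull on particle i toward particle j
            accels[j] -= f / masses[j]   # equal and opposite pull on particle j
        return sum(m * a for m, a in zip(masses, accels)) / sum(masses)

    random.seed(0)
    masses = [random.uniform(0.1, 2.0) for _ in range(6)]
    positions = sorted(random.uniform(-1.0, 1.0) for _ in range(6))
    print(com_acceleration(masses, positions))  # ~0, up to rounding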
Whenever there’s a paradox that mentions infinity, delete the infinity and see if the paradox still exists. Odds are very high it won’t. A lot of diseased mathematical thinking is the result of not being clear with how you take limits, and so when I find a “paradox” that disappears when you get rid of infinity, that’s enough for me to drop the problem.
This seems like an unproductive attitude. If everyone took this attitude, they would never have hammered out the problems in calculus in the 19th century. And physicists would probably never have discovered renormalization in the 20th century.
A better approach is to try to define rigorously what is happening with the infinities. When you try that, either it becomes impossible (that is, there’s something that seems intuitively definable that isn’t definable), or one approach turns out to be correct, or you discover a hidden ambiguity in the problem. In any of those cases one learns a lot more than by simply saying that there’s an infinity so one can ignore the problem.
I will make sure to not spread that opinion in the event that I travel back in time.
I agree with you that this is a better approach. However, the problem in question is “find the error” not “how conservation of momentum works,” and so as soon as you realize “hm, they’re not treating this infinity as a limit” then the error is found, the problem is solved, and your curiosity should have annihilated itself.
So by now Nelson’s outline has been challenged by the formidable Terry Tao, and Nelson (himself formidable!) has responded to this challenge and isn’t budging. Link.
The FTL thread has attracted many confident predictions about the ultimate outcome. But this one hasn’t. Is this because people find the subject less interesting? Or because they are less confident?
For what it’s worth, here’s the timeline of my thoughts/beliefs, in silly internal-monologue form. Maybe the numbers shouldn’t be taken too seriously, and I’m not trying to bait anyone into betting 200 grand against my 10 dollars, but I’d be interested to hear what people have been thinking as (or if) they’ve been following this.
A few days ago: “Contrary to almost everybody, I don’t think there is any reason to believe PA is consistent. Someone, e.g. Nelson or the much younger Voevodsky, might show it is inconsistent in my lifetime.” P approx 5%.
Nelson’s announcement: “Holy shit he’s done it.” P approx 40%
Tao’s challenge: “Oh, I guess not, poor guy how embarrassing.” P back down to 5%
Nelson neither ignores nor caves to Tao: “Beats me, maybe he’s done it after all.” P now maybe at 10%.
I’ve put up a relevant prediction on Prediction Book. The main reason I’ve given it only a 75% chance is that it may take a lot of time to study the matter in detail.
If you are putting 10% on the chance that Nelson’s proof will turn out to be correct, then 10% seems way too high, for essentially the reasons that benelliot touched on below. My own estimate for this being correct is around 0.5%. (That’s sort of a meta-estimate which tries to take into account my own patterns of over- and underconfidence for different types of claims.) I’d be willing to bet $100 to $1, which would be well within both of our stated estimates.
Interesting. Has your prior for “a contradiction will be found in the next hundred years” moved since Nelson’s announcement?
My 5/40/5/10 numbers are (were) my estimation of the chance that a contradiction will be found in my lifetime. I’m far less certain that Nelson will have it sewed up in a few months, as he’s claiming, but it’s still quite likely he’s made a big breakthrough.
Oh. I see. I thought your numbers were for Nelson’s proof working. Hmm, given that, they seem more reasonable but still too high. I’d need to think more carefully about how likely I’d estimate things.
Depending on the exact conditions, I might be prepared to take that.
The issue is mainly that I don’t think I’m qualified to determine the outcome of the debate, Nelson and Tao are both vastly better at maths than me, so I don’t have much to go on. I suspect others are in a similar predicament.
I’ve been trying to understand their conversation, and if I understand correctly, Tao is right and Nelson has made a subtle and quite understandable error. But I also estimate that P(my understanding is correct) is not very large (less than 0.5), so this doesn’t help much, and even if I am right there could easily be something I’m missing.
I would also assign this theorem quite a low prior. Compare it to P != NP: when a claimed proof of that comes out, mathematicians are usually highly sceptical (and usually right), even if the author is a serious mathematician like Deolalikar rather than a crank. Another example that springs to mind is Cauchy’s failed proof of Fermat’s Last Theorem (even though that was eventually proven, I still think a low prior was justified in both cases).
This, if correct, would be a vastly bigger result than either of those. I don’t think it would be an exaggeration to call this the single most important theorem in the history of mathematics if correct, so I think it deserves a much lower prior than them. Even more so since, in the case of P != NP, most mathematicians at least think it’s true, even if they are sceptical of most proofs; in this case most mathematicians would probably be happy to bet against it (that counts for something, even if you disagree with them).
I don’t have much money to lose at this point in my life, but I’d be happy to bet $50 to $1 that this is wrong.
I think there’s a salient difference between this and P = NP or other famous open problems. P = NP is something that thousands of people are working on and have worked on over decades, while “PA is inconsistent” is a much lonelier affair. A standard reply is that every time a mathematician proves an interesting theorem without encountering a contradiction in PA, he has given evidence for the consistency of PA. For various reasons I don’t see it that way.
Same question as for JoshuaZ: has your prior for “a contradiction in PA will be found within a hundred years” moved since Nelson’s announcement?
Yes, obviously P(respectable mathematician claims a contradiction | a contradiction exists) > P(respectable mathematician claims a contradiction | no contradiction exists), so it has definitely moved my estimate.
Like yours, it also moved back down when Tao responded, back up a bit when Nelson responded to him, and back down a bit more when Tao responded to him and I finally managed a coherent guess at what they were talking about.
I’m not sure this is an important difference. I think scepticism about P != NP proofs might well be just as valid even if far fewer people were working on it. If anything it would be more valid: lots of failed proofs give you lots of chances to learn from the mistakes of others, as well as to avoid routes which others in the field have shown not to work. Furthermore, the fact that huge numbers of mathematicians work on P vs. NP but have never claimed a proof suggests a selection effect in favour of those who do claim proofs, which is absent in the case of inconsistency.
Furthermore, not wanting to be unfair to Nelson, but the fact that he’s working alone on a task most mathematicians consider a waste of time may suggest a substantial ideological axe to grind (what I have heard of him supports this theory), and sadly it is easier to come up with a fallacious proof of something when you want it to be true.
I’m not sure if this line of debate is a productive one, the issue will be resolved one way or the other by actual mathematicians doing actual maths, not by you and me debating about priors (to put it another way, whatever the answer ends up being, this conversation will have been wasted time in retrospect).
Can you roughly quantify it? Are we talking from million-to-one to million-to-one-point-five, or from million-to-one to hundred-to-one?
Sorry if I gave you a bad impression: I am not trying to start a debate in any adversarial sense. I am just curious.
Of that there’s no doubt, but it speaks well of Nelson that he’s apparently resisted the temptation toward self-deceit for decades, openly working on this problem the whole time.
The announcement came as a surprise, so the update wasn’t negligible. I probably wouldn’t have gone as low as million-to-one before, but I might have been prepared to estimate a 99.9% chance that arithmetic is consistent. However, I’m not quite sure how much of this change is a Bayesian update and how much is the fact that I got a shock and thought about the issue a lot more carefully.
Cauchy? Don’t you mean Lamé?
Anyway, if you lose the bet then 1 = 0, and therefore you don’t need to transfer any money.
From what I gathered, both went down a similar route at the same time, although it is possible that I am mistaken in this. I named Cauchy as he seems to be the better-known of the two, and therefore the one with the highest absolutely-positively-not-just-some-crank factor.
If I lose the bet (not that anyone has yet agreed to accept the bet), then payment will be based on restricted number theory without the axiom of induction.
And here I thought there wasn’t anything besides c I’d bet on at 99-to-1 odds.
Two! Two things in the universe I’d bet on at 99-to-1 odds. Though I’m not actually going to do it for more than say $2k if anyone wants it, since I don’t bet more than I have, period.
Would you bet at 99-to-1 odds that I will not win the lottery tomorrow?
Assume for a second that FTL communication is possible and that PA is inconsistent. How could this possibly influence a proof of AI friendliness that has been invented before those discoveries were made and how can one make an AI provably friendly given other possible fundamental errors in our understanding of physics and mathematics?
We’d have to know what a proof of friendliness looks like to answer this.
So this is an interesting set of questions. I don’t think FTL issues should matter in this context. Inconsistency in PA is more of an issue. I’m not terribly worried about an actual inconsistency in PA. So much goes wrong if that’s the case that AI issues might actually be small (no really. It could be that bad.). That said, however, this sort of worry makes a lot more sense for proofs in ZF or ZFC. I’d be a lot more worried about a contradiction showing up in ZF than in PA. So one might want to make sure that any such proofs use as small a segment of ZF as possible.
That is an understandable reaction. But since encountering Nelson’s ideas a while ago, I’ve found myself thinking less and less that it would be that bad at all. There is no very strong evidence that completed infinities need to be grappled with at a physical or real-world level. The main “finitary” thing you lose without PA is superexponentiation. But that shows up surprisingly rarely, in the universe and in math. (Or maybe not so surprisingly!)
Interestingly, superexponentiation shows up very often on Less Wrong. Maybe that’s a good answer to XiXiDu’s question. If Nelson is right, then people should stop using 3^^^3 in their thought experiments.
...I don’t understand how this can be. 3^^3 is just 3^(3^3) = 3^27 = 7625597484987, and 3^^^3 is just 3^(3^(3^(… 7625597484987 times …))).
Superexponentiation is just made of exponentiation many times. And exponentiation is made of multiplication, and multiplication is made of addition.
How can superexponentiation be made invalid without making invalid even normal addition?
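For readers who want the recursion spelled out, here is a minimal Python sketch of Knuth's up-arrow notation, which is the standard way of building superexponentiation out of exponentiation (the code itself is just an illustration, not anything from Nelson's work):

    def up_arrow(a, n, b):
        """a followed by n up-arrows, applied to b.
        n=1 is exponentiation, n=2 is superexponentiation (tetration)."""
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    print(up_arrow(3, 1, 3))  # 3^3  = 27
    print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
    # up_arrow(3, 3, 3) is 3^^^3: a tower of 7625597484987 threes,
    # far beyond anything physically computable.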
I can say more if you want, but maybe see here and here for an explanation.
Any finite calculation of superexponentiation will be valid. But as I understand it, in Nelson’s formulation you can’t prove that superexponentiation is well-defined in general.
My understanding is that Nelson doesn’t necessarily believe in exponentiation either although his work here doesn’t have the direct potential to wreck that.
The real issue isn’t PA as much as the fact that there are a very large number of theorems we use regularly that live in axiomatic systems which model PA in a nearly trivial fashion. For example, ZFC minus replacement, choice, and foundation. Even much weaker systems, where one replaces the power set axiom with some weaker axiom, allow you to construct PA. For example, replace the power set axiom with the claim that for any two sets A and B their Cartesian product always exists. Even this is much stronger than you need. Similarly, you can massively reduce the allowed types of input sets and predicates in the axiom schema of specification and still have a lot more than you need. You can, for example, restrict the allowed predicates to at most three quantifiers and still be OK (this may be restrictable to even just two quantifiers, but I haven’t worked out the details).
And what Nelson’s proof (if valid) breaks is weaker than PA which means that if anything the situation is worse.
It may be, for example, that any axiomatic system which lets us talk about both the real numbers and the integers allows Nelson’s problem. Say, for example, one has an axiomatic system which includes Nelson’s extreme finitary version of the natural numbers, the very minimal amount of set theory necessary to talk about sets, second-order reals, and an axiom embedding your “natural numbers” into the reals. Can you do this without getting all sorts of extra stuff thrown in? I don’t know, and this looks like an interesting question from a foundational perspective. It may depend a lot on which little bits of set theory you allow. But I’m going to guess that the answer is no. I’ll need to think about this a bit more to make the claim more precise.
I know this isn’t the main point of your comment but I got carried away looking this stuff up.
In his 1986 book, he defines a predicate exp(x,k,f) which basically means “f is a function defined for i < k and f(i) = x^i”. And he defines another predicate epsilon(k) which is “for all x there exists f such that exp(x,k,f)”. I think you can give a proof of epsilon(k) in his regime in O(log log k) steps. In standard arithmetic that takes O(1) steps using induction, but O(log log k) is pretty cheap too if you’re not going to iterate exponentiation. So that’s a sense in which Nelson does believe in exponentiation.
On the other hand I think it’s accurate to say that Nelson doesn’t believe that exponentiation is total, i.e. that for all n, epsilon(n). Here’s an extract from Nelson’s “Predicative Arithmetic” (Princeton U Press 1986). Chapter heading: “Is exponentiation total?”
This argument applies just as well to addition and multiplication.
See Nelson’s paper where he talks about the qualitative difference between types of recursion.
I think I can explain that. Nelson doesn’t believe in any kind of Platonic number, but he does believe in formal number systems and that they can sometimes accurately describe the world. We have essentially two number systems: the tally-mark system (e.g. SSSSS0 and so on), and the positional system (e.g. 2011 and so on). Both of these are successful at modeling aspects of the world.
Most people regard these two number systems as equivalent in some sense. Specifically, there is a procedure for rewriting a positional number as a tally number, and most people believe that this procedure terminates “at least in principle.” (For instance, one can formally prove this in PA using the induction axiom). But Nelson denies it. He believes that the positional number system and the tally number system are different, and that the positional number system is strictly stronger. I think your remark --
-- applies to the tally system, but not to the positional system. But Nelson does not believe that the positional system is inconsistent. So he is happy to adjoin axioms to the tally system that allow for working with addition and multiplication.
My knowledge of complexity theory is very limited, but I thought exponentiation could be done in polynomial time. Is that incorrect, or am I misunderstanding Nelson’s argument (please don’t tell me he also doubts complexity theory, we must be allowed some fun :P).
EDIT: Yes, I was wrong; the claim of polynomial time in the length of b may well be too strong, but it does seem to use fewer than 2^b steps. I think I can use the positional system to compute 2^b in O(b^2) steps, which seems to lead us straight from accepting multiplication to accepting exponentiation.
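A rough sketch of the kind of count I mean (this is just repeated doubling of a decimal digit string, my own illustration rather than anything from Nelson's paper): b doublings, each touching O(b) digits, for O(b^2) elementary steps in total.

    def double_decimal(digits):
        """Double a number stored as a little-endian list of decimal digits.
        A single pass over the digits: O(length) elementary steps."""
        carry, out = 0, []
        for d in digits:
            v = 2 * d + carry
            out.append(v % 10)
            carry = v // 10
        if carry:
            out.append(carry)
        return out

    def power_of_two(b):
        """Compute 2^b in positional notation by b successive doublings.
        Each doubling costs O(b) steps, so the total is O(b^2)."""
        digits = [1]
        for _ in range(b):
            digits = double_decimal(digits)
        return ''.join(str(d) for d in reversed(digits))

    print(power_of_two(10))          # '1024'
    print(len(power_of_two(1000)))   # 302 digits, reached in roughly 10^6 steps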
I think so, in positional notation. That’s what I was alluding to with my O(log log k) comment. But there is no polynomial time algorithm to reduce 2^n to a tally number (obviously).
I once read something by Nelson about the polynomial hierarchy in complexity theory, but I can’t find it now.
This aspect is very interesting:
It seems easier to me to simply trust Coq than to read through 23 megabytes of proofs by eye. But I’m not entirely certain what the purpose of qea is.
It looks like Qea does verify the proof and generates human-readable text. In that case, my trust in it or Coq would be based on their relative size, adjusted for the amount of testing these tools have received. Coq uses a non-trivially unique proof system (the calculus of coinductive constructions, if I remember correctly), which counts against it. Did anyone look at what proof system Qea uses?
Yeah, I’ve heard about this “notorious difficulty” in verifying proof checkers before, and I don’t understand what it could mean. Humans are way more unverifiable.
I bet the real purpose of qea is that it allowed its author to avoid learning to use something else. But I find it interesting that he recognizes the importance of computer verification for something like this, and maybe it indicates that this has been in the works for a while. It doesn’t have to be half-baked to be wrong, though.
I think the idea is that for a point this controversial he is well aware that mathematicians may actually object to his trustworthiness (they’ve objected to less questionable things in the past) and want to verify the proof for themselves. I think he may well be right in this. However, I don’t see why he can’t give a full explanation for humans as well (his current paper isn’t one), since this would probably be finished sooner and would probably save a lot of his own time if there is a mistake.
Actually, I would be more interested in the size of the checking core of Qea. Quickly inspecting 23 megabytes of proofs sounds like a bad idea. A converter to the TSTP first-order proof format (for example) would allow me to use a few different bug-for-bug-independent, well-tested checkers for this format that are already out there; I would trust that more than both human inspection and any single-implementation prover.
How do you get to 733 axioms? Maybe I’m being stupid, but doesn’t PA run on just 5?
Strictly speaking, PA uses infinitely many axioms—the induction axiom is actually an axiom schema, one axiom for each predicate you can plug into it. If you actually had it as one axiom quantifying over predicates, that would be second-order.
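For concreteness, each instance of the schema looks like this (one instance for every arithmetic formula φ, which may have extra parameters):

    (φ(0) ∧ ∀n (φ(n) → φ(n+1))) → ∀n φ(n)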
I think that Nelson denies that there is a completed infinity of predicates that you can plug into the schema.
Well, you’d certainly only need finitely many to prove inconsistency.
I think 733 is counting axioms, definitions, and theorems all.
That would explain it.
It said “733 axioms, definitions, and theorems”
I’m guessing 733 is the sum of the axioms, definitions and theorems, not just the axioms alone.
Here’s a summary and discussion of the affair, with historical comparison to the Gödel results and their reception (as well as comments from several luminaries, and David Chalmers) on a philosophy of mathematics blog whose authors seem to take the position that the reasons for consensus in the mathematical community are mysterious. (It is admitted that “arguably, it cannot be fully explained as a merely socially imposed kind of consensus, due to homogeneous ‘indoctrination’ by means of mathematical education.”) This is a subject that needs to be discussed more on LW, in my opinion.