A lot of things are trivially and obviously true on arbitrarily large but finite domains. People tend to assume that they will naturally transfer to countable domains. It doesn’t look that different. But very often they break there.
On finite domains, everything is trivially decidable. On infinite domains, nearly everything is undecidable.
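A minimal sketch of the finite case (the Goldbach-style statement here is my own illustrative choice, not from the thread): any property over a finite domain can be settled by exhaustively checking every element, so the bounded question is trivially decidable even when the unbounded version is open.

```python
# Brute-force decision over a finite domain.
# Illustration: "every even n in [4, N] is a sum of two primes" is
# decidable for any fixed N by exhaustive search, while the unbounded
# Goldbach conjecture remains open.

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_holds_up_to(N):
    # Check every even number in the finite domain [4, N].
    for n in range(4, N + 1, 2):
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
            return False  # a definite counterexample
    return True  # the bounded statement is settled: true

print(goldbach_holds_up_to(1000))  # True -- the finite check terminates with an answer
```

The loop always terminates, so a definite yes/no answer exists and is computable; nothing like that is guaranteed once the domain becomes infinite.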
It doesn’t feel like “does the Riemann hypothesis have a proof of less than 1TB” and “does the Riemann hypothesis have a proof” are entirely different categories of statements. The first must be true or false. We don’t know, but the truth exists. Just ask a perfect Bayesian.
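To make the contrast concrete (this sketch is mine, and `verify` is a placeholder, not a real proof checker): with a size bound there are only finitely many candidate proofs, so enumerating and checking them all is a terminating procedure in principle, however astronomically slow.

```python
# Sketch: why "is there a proof under a fixed size bound?" is decidable
# in principle. `verify` stands in for a real mechanical proof checker
# (checking any single candidate proof IS decidable); here it is a stub
# so the sketch runs.

from itertools import product

def verify(candidate, statement):
    # Placeholder: a real checker would validate each inference step.
    return False  # stub

def bounded_proof_search(statement, alphabet, max_len):
    # Finitely many strings up to max_len, so this loop always
    # terminates with a definite yes/no answer.
    for length in range(1, max_len + 1):
        for chars in product(alphabet, repeat=length):
            if verify("".join(chars), statement):
                return True
    return False

# Drop the bound and the search may run forever on an unprovable
# statement -- that is exactly where undecidability enters.
print(bounded_proof_search("RH", "ab", 8))  # False with the stub checker
```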
Drop the space requirement, and now you have to deal with some theorems being true but unprovable. There’s this third emergent state between provably true and provably false. Everyone has their pet way of dealing with it, usually involving a lot of handwaving and then pretending nothing changed. For example—perfect Bayesians are no longer possible. There is just no self-consistent assignment of probabilities possible.
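The textbook instance of that third state (not spelled out in the comment above) is Gödel’s sentence, obtained from the diagonal lemma:

```latex
% Diagonal lemma: for PA's provability predicate there is a sentence G with
\mathrm{PA} \vdash G \leftrightarrow \neg\,\mathrm{Prov}_{\mathrm{PA}}(\ulcorner G \urcorner)
% If PA is consistent, PA does not prove G; if PA is omega-consistent,
% PA does not prove \neg G either -- yet G holds in the standard model.
```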
You can handwave some deeper notion of “truth” beyond computability, like so many do, without much clue how it would work; or you can insist, like the intuitionists, that uncomputable things don’t exist; or even just say that huge numbers don’t exist.
It all sucks one way or the other.
Rather just ask a tree searching algorithm running on a fast machine, in this case.
Really? Do you have an example? I’d be interested in seeing one.
(What do you mean here by “self-consistent”?)
Making this statement more precise is difficult. To make things easy I’m going to assume that probabilities live between 0 and 1 and that our general system of axioms is first order R with first order Q.
Now, our probability assignments are consistent iff they don’t lead to a contradiction. So, assign your probabilities however you like; by Robinson’s theorems, I can describe PA using our axioms (this isn’t precisely true, but it is close enough for our purposes). I then invoke the contradiction we have in PA. No matter what initial probability assignment you choose, I can do this.
Note that this argument might not work if we just have the first-order theory of the reals as our axiomatic system, because that’s not sufficient to define PA in R, assuming PA is consistent (to see this, note that we have a decision procedure for the first-order reals, so if we could define PA there we’d have a way of deciding claims in PA). I don’t think there’s any way to import contradictions in PA into first-order R, and I suspect this can’t be done in general. (I suspect I’m missing some technical details here, so take this with a handful of salt.)