The problem is, no matter how ‘problem-specific’ your proofs are, they aren’t going to be ‘verifiable’ unless you specify them all the way down to some reasonable foundation. That’s the really big undertaking, so you’ll want to unify things as much as possible, if only to share whatever you can and avoid any duplication of effort.
The problem is, no matter how ‘problem-specific’ your proofs are, they aren’t going to be ‘verifiable’ unless you specify them all the way down to some reasonable foundation.
If that’s true then it logically follows that most existing mathematics literature is un-verifiable—a statement that I think mathematicians would take issue with. After all, that’s not how most mathematics literature is presented.
I agree with that.
In the future, it would be best to derive everything from the axioms (using libraries where the frequently used theorems are already proved). The problem is that the simplest theorems we can quickly derive from the axioms are not important enough to pay for the development and use of the software.
So a better approach would be for the system to accept a few theorems as (temporary) axioms. Essentially, if it would be okay to use the Pythagorean theorem in a scientific paper without proving it, then in the first version of the program it would be okay to use the Pythagorean theorem as an axiom—displaying a warning: “I have used the Pythagorean theorem without having a proof of it.”
This first version would already be helpful for verifying current papers. And there is the option to provide a proof of the Pythagorean theorem from first principles later. If you add it later, you can re-run the papers and get the results with fewer warnings. If the Pythagorean theorem happens to be wrong, then as long as the warnings were recorded for all papers, you know which of them to retract.
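For concreteness, here is a minimal sketch of that workflow in a proof assistant such as Lean 4 (the choice of system, the toy 3-4-5 statement, and all of the names below are mine, purely for illustration). The unproved theorem is admitted with the keyword sorry, everything built on top of it still checks, and the tool emits exactly the kind of warning described above:

-- Toy stand-in for the Pythagorean theorem; a real formalisation would state
-- it over Euclidean geometry rather than this 3-4-5 special case.
theorem pythagorean_like (a b c : Nat)
    (ha : a = 3) (hb : b = 4) (hc : c = 5) :
    a * a + b * b = c * c := by
  sorry  -- admitted for now; Lean warns that the declaration uses 'sorry'

-- A downstream "paper" result that relies on the admitted theorem.
theorem downstream_result : 3 * 3 + 4 * 4 = 5 * 5 :=
  pythagorean_like 3 4 5 rfl rfl rfl

-- Lists every unproved assumption the result depends on (here: sorryAx).
#print axioms downstream_result

Replacing the sorry with a real proof later and re-running the checker is precisely the “fewer warnings on the next run” step.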
Actually, I believe such systems would be super helpful, e.g. in set theory, when you want to verify whether a proof relies on the axiom of choice. Because even if you didn’t use it directly, maybe one of the theorems you used was based on it. Generally, using different sets of axioms could become easier.
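This kind of dependency tracking already exists in current proof assistants. A small Lean 4 illustration (again just a sketch, with lemma names invented for the example): the #print axioms command reports that a classically proved lemma ultimately rests on Classical.choice even though the proof never mentions choice by name, while a constructive lemma reports no axioms at all.

-- A constructive fact: the axiom report comes back empty.
theorem n_add_zero (n : Nat) : n + 0 = n := rfl
#print axioms n_add_zero
-- output: 'n_add_zero' does not depend on any axioms

-- A classical fact: excluded middle is derived from choice (via Diaconescu's
-- argument), so the hidden dependency is surfaced automatically.
theorem excluded_middle_demo (p : Prop) : p ∨ ¬p := Classical.em p
#print axioms excluded_middle_demo
-- output: 'excluded_middle_demo' depends on axioms: [propext, Classical.choice, Quot.sound]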
Yes, that’s an insightful way of looking at how computer verification could assist in real mathematics research.
Going back to the CS analogy, programmers started out by writing everything in machine language. Then, gradually, people began to write commonly-used functions as libraries that you could just install and forget about (they didn’t even have to be in the same language), and they wrote higher-level languages that could automatically compile to machine code. Higher and higher levels of abstraction were recognized and implemented over the years (for things like parsers, data structures, databases, etc.) until we got to modern languages like Python and Java, where programming almost feels like simply writing out your thoughts. There was very little universal coordination in all of this; it just grew out of the needs of various people. No one in 1960 sat down and said, “Ok, let’s write Python.”
until we got to modern languages like Python and Java, where programming almost feels like simply writing out your thoughts. … No one in 1960 sat down and said, “Ok, let’s write Python.”
For a very good reason: let me invite you to contemplate Python performance on 1960-class hardware.
As to “writing out your thoughts”, people did design such a language in 1959...
P.S. Oh, and do your thoughts flow like this..?
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}
For a very good reason: let me invite you to contemplate Python performance on 1960-class hardware.
That the implementation of Python is fairly slow is a different matter, and high-level languages need not be any slower than, say, C or Fortran, as modern JIT-compiled languages demonstrate. It just takes a lot of work to make them fast.
As to “writing out your thoughts”, people did design such a language in 1959...
Lisp was also designed during that same period and probably proves your point even better. But 1960s Lisp was as bare-bones as it was high-level; you still had to write almost everything yourself from scratch.
But 1960s Lisp was as bare-bones as it was high-level; you still had to write almost everything yourself from scratch.
Computerized math is the same today. No one wants to write everything they need from scratch, unless they’re working in a genuinely self-contained (i.e. ‘synthetic’) subfield where the prereqs are inherently manageable. See programming-language theory (with its POPLmark challenge) and homotopy type theory as examples where computerization is indeed making quick progress.
Umm… LISP is elegant and expressive—you can (and people routinely do) construct complicated environments including DSLs on top of it. But that doesn’t make it high-level—it only makes it a good base for high-level things.
But if you use “high-level” to mean “abstracted away from the hardware” then yes, it was, but that doesn’t have much to do with “writing out your thoughts”.
For a very good reason: let me invite you to contemplate Python performance on 1960-class hardware.
LISP was definitely a thing in the 1960s, and Python is not that different. For a long time, the former was pretty much ‘the one’ very-high-level, application-oriented language. Much like Python or Ruby today.
8-0 Allow me to disagree.
pretty much ‘the one’ … application-oriented language
Allow me to disagree again. LISP was lambda calculus made flesh and was very popular in academia. Outside of the ivory towers, the suits used COBOL, and the numbers people used Fortran (followed by a whole lot of Algol-family languages) to write their applications.