How far might we get with computational proofs / how broadly can they be applied?
In the post I wrote “Computational proofs could be constructed based on the inference-rules that reviewers agree with, without the reviewers having to affirm every step of inference”. This is potentially an important point, but one that I didn’t really elaborate on.
I won’t discuss it much here either, but I do hope to write more about computational proofs at some later stage. I think there potentially could be powerful synergies/interplays if AI-constructed computational proofs are integrated with the kinds of techniques described above.
Some points I hope to write more about in the future:
What is the difference between proofs and rigorous arguments more generally? Is there a clear difference, or is the distinction a blurry one?
Computational proofs are typically used for mathematics. Could the scope of their use be extended?
Could it be that existing proof assistants are limiting our conceptions of what’s possible when it comes to user-friendly computational proofs?
Suppose we want to construct proofs that deal with not just mathematical concepts, but also real-world concepts. And suppose that any human interactions with these proofs will happen on computers. Would existing formalisms like predicate logic or dependent type theory be the way to go? Or might it be more convenient to come up with new formalisms?
If we come up with new formalisms for computational proofs, might it make sense to design them so that they are easily convertible into human language (but with clauses demarcated and no syntactic ambiguity)?
Some logical formalisms can express statements about probability and be used to reason about probability. Somewhat analogously, could there be merit to formalisms that are designed to deal with ambiguity and cluster-like concepts from within themselves? (A small example of a probability logic is given after this list.)
There is an incongruity between formal computational proofs and ambiguous, cluster-like concepts. What are some possible techniques for dealing with this incongruity?
Are there theoretical constructs that could be useful for “converting” between human concepts and the abstractions used by an AI system, or for referencing human concepts in an “indirect” way?
Might it make sense to design formalisms that can reason about code from within themselves? By this I mean being able to reference and “talk about” source code, and also being able to reference the behavior and output that result from running that source code. (A toy sketch is given after this list.)
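To gesture at what I mean by formalisms that reason about probability from within themselves (the example is my own): in the probability logic of Fagin, Halpern, and Megiddo, the language contains weight terms $w(\varphi)$ denoting the probability of a formula $\varphi$, and one can write and derive statements such as

$$w(\varphi \wedge \psi) \ge 0.9 \quad \text{and} \quad w(\varphi) \le 2\,w(\lnot\psi).$$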
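And to gesture at the last point with a toy example: in a proof assistant like Lean, statements can already reference a program and the output of running it. (The sketch below is my own; referencing the source code itself as a first-class object would require going further, e.g. via metaprogramming facilities.)

```lean
-- A toy sketch in Lean 4 (assuming a recent toolchain, where the `omega`
-- tactic is built in). `double` is an illustrative definition of my own.
def double (n : Nat) : Nat := n + n

-- A statement about the output of running `double` on a concrete input;
-- the kernel checks it by evaluating the definition.
example : double 21 = 42 := rfl

-- A statement about its behavior on all inputs.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```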
The idea of extending the scope of what “proofs” are used for is not new. Here, for example, are some quotes from Gottfried Wilhelm Leibniz (1646-1716):
“We should be able to reason in metaphysics and morals in much the same way as geometry and analysis.”
“The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate, without further ado, to see who is right.”
“It is true that in the past I planned a new way of calculating suitable for matters which have nothing in common with mathematics, and if this kind of logic were put into practice, every reasoning, even probabilistic ones, would be like that of the mathematician: if need be, the lesser minds which had application and good will could, if not accompany the greatest minds, then at least follow them. For one could always say: let us calculate, and judge correctly through this, as much as the data and reason can provide us with the means for it. But I do not know if I will ever be in a position to carry out such a project, which requires more than one hand; and it even seems that mankind is still not mature enough to lay claim to the advantages which this method could provide.”
It’s hard to know exactly what Leibniz had in mind, as he had limited communication bandwidth. The way many people interpret him, his ideas might sound like an unrealistic pipe dream (be that because they were one, or because he is being unintentionally strawmanned). But even if there was some level of naivety to his thinking, that does not rule out him also having been onto something.
I would describe myself as agnostic about how widely the scope of computational proofs can be extended (while still being useful, and while still maintaining many of the helpful properties that mathematical proofs have).
I think one reason we don’t formalize our reasoning to a greater extent is that we don’t bother; we experience it as simply not being worth the time it takes. But if AIs can do the formalization for us, and AIs are constructing the “proofs” (and if they can predict how we would review their work, without actual humans having to review every step), then that might change what is and isn’t feasible.
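To make the “reviewers approve rules, a machine checks steps” idea concrete, here is a minimal sketch in Python. Everything in it (the `Step` structure, `check_proof`, the toy modus ponens rule) is hypothetical and of my own making, not from any existing system:

```python
# A minimal, hypothetical sketch of the idea: reviewers approve inference
# *rules*; a mechanical checker then validates every *step* of an
# AI-constructed proof, so no human has to affirm each step individually.

from dataclasses import dataclass

@dataclass(frozen=True)
class Step:
    claim: str        # the statement this step asserts
    rule: str         # name of the reviewer-approved rule it invokes
    premises: tuple   # earlier claims this step depends on

def modus_ponens(premises, claim):
    # Premises of the form ("P", "P -> Q") justify the claim "Q".
    if len(premises) != 2:
        return False
    p, implication = premises
    return implication == f"{p} -> {claim}"

# The rules that (hypothetical) reviewers have signed off on.
APPROVED_RULES = {"modus_ponens": modus_ponens}

def check_proof(axioms, steps):
    """Accept a proof iff every step follows from axioms or earlier
    steps via a reviewer-approved rule."""
    established = set(axioms)
    for step in steps:
        rule = APPROVED_RULES.get(step.rule)
        if rule is None:
            return False  # cites a rule the reviewers never approved
        if not all(p in established for p in step.premises):
            return False  # relies on a premise that was never established
        if not rule(step.premises, step.claim):
            return False  # the rule does not actually license this step
        established.add(step.claim)
    return True

# A one-step toy example: deriving "wet" from weather axioms.
axioms = ["rain", "rain -> wet"]
proof = [Step(claim="wet", rule="modus_ponens", premises=("rain", "rain -> wet"))]
print(check_proof(axioms, proof))  # True
```

The point is only the division of labor: humans sign off on the rules once, and the checker then mechanically validates arbitrarily many AI-constructed steps.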
Screenshot from the proof assistant Coq. Illustration from my master’s thesis, which can be found here. The thesis speculates about how we might make more user-friendly tools for people to construct computer-verified inference-chains. (Disclaimer: it’s hastily written and full of spelling mistakes. It was written in 2017, and my thoughts have developed since then.)