GÖDEL GOING DOWN
Ever since David Hilbert introduced his programme, a great deal of work has gone into the question of which conclusions can and cannot be algorithmically extracted from a given set of axioms. Much less has been said about the converse direction: whether it is possible to construct a complete set of axioms for an already well-defined body of knowledge. I am starting to suspect that incompleteness lurks here as well.
The problem is that there will always be assumptions which escape notice precisely because they are so blatantly obvious, a difficulty well known to all aficionados of detective fiction.
Euclid gave five postulates and another five common notions. But are they complete? For instance, why don’t we have an axiom telling us that it is possible for the human mind to conceive of geometric objects? But that is psychology, not mathematics, you say? Fine. Then why don’t we have an axiom telling us that the idealists are wrong?
Huh? Didn’t Gödel conclusively prove that the answer to pretty much every meaningful form of your question is “no”?
I don’t think you understand what mathematicians mean by the word “complete.” It means that all theorems which can be stated in the system can also be proven in the system (or something similar).
Roughly speaking, (upward) completeness means that every statement expressible in the language of the system can either be proven from the axioms of the system or refuted by them, in the sense that its negation can be proven.
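For readers who prefer symbols, here is the standard definition, sketched in the usual logical notation (with T a first-order theory and \varphi ranging over the sentences of its language L):

    T \text{ is complete} \;\iff\; \forall \varphi \in \mathrm{Sent}(L):\ T \vdash \varphi \ \text{ or } \ T \vdash \neg\varphi

Gödel’s first incompleteness theorem says that any consistent, recursively axiomatisable T interpreting enough arithmetic fails this test: there is a sentence G with T \nvdash G and T \nvdash \neg G.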
That is not quite the same thing as your statement, but I think it would be a mistake here to argue over which interpretation is right. My reluctance stems from the fact that the upward arc of completeness is incidental to the argument I am making. I mentioned it because many readers of Less Wrong are familiar with it, and I hoped it would capture interest as well as provide orientation.
Here, I am interested in the question of whether the downward arc can ever be made complete, even in principle, and I deliberately chose a provocative example to emphasise the point that there will be controversy over what requires explicit mention in the axioms. I have been thinking about mathematics, but any sufficiently complex system would suffer the same difficulty: a utilitarian moral system, for instance, or an economy steered by an artificial intelligence.
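To make the downward question slightly more concrete, here is one rough formalisation of my own, offered as an illustrative sketch rather than a standard definition. Treat the well-defined body of knowledge as a set K of sentences, and ask for an axiom set A satisfying

    A \ \text{recursively enumerable}, \qquad \forall \varphi \in K:\ A \vdash \varphi, \qquad \text{and every tacit assumption made explicit in } A.

The first two clauses are trivially satisfiable, since one can take A to be K itself. All the difficulty sits in the third clause, which resists formal statement for exactly the reason given above: the assumptions most likely to be omitted are the ones too obvious to notice.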
I don’t exclude the possibility of an extremely threadbare system which is downward complete. But I suspect such systems would be very boring.