Roughly speaking, (upward) completeness means that every statement about the system can either be shown to follow from the axioms of the system or be shown to contradict them, that is, its negation follows from the axioms.
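For concreteness, one standard way of putting this formally (a sketch, and not the only possible reading of "completeness") is: a theory $T$ is syntactically complete if, for every sentence $\varphi$ expressible in its language,

$$T \vdash \varphi \quad \text{or} \quad T \vdash \neg\varphi.$$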
That is not quite the same thing as your statement, but I think it would be a mistake here to argue over which interpretation is right. My reluctance is because the upward arc of completeness is incidental to the argument I am making. I mentioned the upward arc because many readers of Less Wrong are familiar with it, and I hoped it would capture interest as well as provide orientation.
Here, I am interested in the question of whether the downward arc can ever be made complete, even in principle, and I deliberately chose a provocative example to emphasise the point that there will be controversy about what requires explicit mention in the axioms. I had been thinking about mathematics, but any sufficiently complex system would suffer the same difficulty—for instance, a utilitarian moral system, or an economy steered by an artificial intelligence.
I don’t exclude the possibility of an extremely threadbare system which is downward complete. But I suspect such systems would be very boring.