From my perspective, when I’ve explained why a single AI alone in space would benefit instrumentally from checking proofs for syntactic legality, I’ve explained the point of proofs. Communication is an orthogonal issue, having nothing to do with the structure of mathematics.
I thought of a better way of putting what I was trying to say. Communication may be orthogonal to the point of your question, but representation is not. An AI needs to use an internal language to represent the world or the structure of mathematics—this is the crux of Wittgenstein’s famous “private language argument”—whether or not it ever attempts to communicate. You can’t evaluate “syntactic legality” except within a particular language, whose correspondence to the world is not given as a matter of logic (although it may be more or less useful pragmatically).
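To make that concrete, here is a minimal toy sketch (purely illustrative, in Python; the formula encoding, the axioms, and the `is_legal_proof` helper are all invented for this example, not anyone’s actual proposal). “Syntactic legality” is only defined relative to a fixed formal system—here a toy Hilbert-style system with some chosen axioms and modus ponens as the sole rule. The checker verifies form and says nothing about what the symbols are about.

```python
# Toy sketch: legality of a proof relative to a fixed formal system.
# Formulas are strings or nested tuples, e.g. ("->", "p", "q") for "p -> q".

def is_legal_proof(proof, axioms):
    """Check that each line is an axiom or follows by modus ponens
    from two earlier lines. No semantics, only syntactic form."""
    derived = []
    for formula in proof:
        ok = formula in axioms or any(
            prem == ("->", other, formula)   # prem is "other -> formula"
            for prem in derived
            for other in derived
        )
        if not ok:
            return False
        derived.append(formula)
    return True

# From the axioms {p, p -> q}, the two-step derivation of q is legal;
# the one-line "proof" of q on its own is not.
axioms = {"p", ("->", "p", "q")}
print(is_legal_proof(["p", ("->", "p", "q"), "q"], axioms))  # True
print(is_legal_proof(["q"], axioms))                         # False
```

Change the axioms or the encoding and the same string of symbols can go from legal to illegal; that relativity to a chosen language is the point.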
See my reply to Chappell here and the enclosing thread: http://lesswrong.com/lw/f1u/causal_reference/7phu
If your point is that it isn’t necessarily useful to try to say in what sense our procedures “correspond,” “represent,” or “are about” what they serve to model, I completely agree. We don’t need to explain why our model works, although some theory may help us to find other useful models.
But then I’m not sure I see what is at stake when you talk about what makes a proof correct. Obviously we can have a valuable discussion about what kinds of demonstration we should find convincing. But ultimately the procedure that guides our behavior either gives satisfactory results or it doesn’t; we were either right or wrong to be convinced by an argument.
The mathematical realist concept of “the structure of mathematics”—at least as separate from the physical world—is problematic once you can no longer describe what that structure might be in a non-arbitrary way. But I see your point. I guess my response would be that the concept of “a proof”—which implies that you have demonstrated something beyond the possibility of contradiction—is not what really matters for your purposes. Ultimately, how an AI manipulates its representations of the world and how it internally represents the world are inextricably related problems. What matters is how well the AI can predict/retrodict/manipulate physical phenomena. Your AI can be a pragmatist about the concept of “truth.”