Nice post. Being convinced myself of the importance of mathematics, both for understanding the world in general and for the specific problems of AI safety, I found it interesting to see what arguments you marshaled for and against this position.
About the unreasonable effectiveness of mathematics, I’d like to bring up the “follow-up” statement: the unreasonable ineffectiveness of mathematics beyond physics (for example in biology). The counterargument, at least for biology, is that Wigner was talking mostly about differential equations, which do seem somewhat ineffective in biology; but theoretical computer science, which one can see as the mathematical study of computation, and thus as something of a branch of mathematics, might be better suited to biology.
A general comment about your perspective is that you seem to equate mathematics with formal specification and proofs. That’s not necessarily an issue, but most modern mathematicians tend not to be strict formalists, so I thought it important to point out.
For the rest of my comments:
Rather than precise, I would say that mathematics is formal. The difference lies in the fact that a precise statement captures an idea almost exactly, whereas a formalization provides an objective description of… something. Given that the main difficulty, both in applying mathematics and in writing specifications for formal methods, is this ontological identification between the formalization and the object in the world, I feel it’s a bit too easy to say that maths captures ideas precisely.
Similarly, just because the definitions themselves are unambiguous (if they are formal) does not mean that their interpretation, meaning, and use are. I agree that a formal definition is far less ambiguous than a natural-language one, but that does not mean it is completely unambiguous. Many disagreements I had in research were about the interpretation of the formalisms themselves.
Although I agree with the idea of mathematics capturing some concept of simplicity, I would specify that it is simplicity when everything is made explicit. That’s rather obvious for rationalists. Formal definitions tend to be full of subtleties and hard to manage, but the fully explicit versions of the “simpler” models would actually be more complex than that.
Nitpick about the “quantitative”: what of abstract algebra, and all the subfields that are not explicitly quantitative? Are they useful only insofar as they serve the more quantitative parts of maths, or am I taking this argument too far, and you just meant that one use of maths lies in the quantitative parts?
The talk about Serial Depth makes me think of deconfusion. I feel it is indeed rather easy to make someone not confused about making a sandwich, while this has yet to be done for AI safety.
The Anthropocentrism argument feels right to me, but I think it doesn’t apply if one is trying to build prosaic aligned AGI. In that case, the most important task is to solve rather anthropocentric models of decisions and values, instead of abstracting them away. But I might be wrong on that one.