Math is interesting in this regard because it is both very precise and there’s no clear-cut way of checking your solution except running it by another person (or becoming so good at math that you can tell whether your proof is bullshit).
Programming, OTOH, gives you clear feedback loops.
In programming, that’s true at first. But as projects increase in scope, there’s a risk of using an architecture that works when you’re testing, or for your initial feature set, but will become problematic in the long run.
For example, I just read an interesting article on how a project used a document store database (MongoDB), which worked great until their client wanted the software to start building relationships between data that had formerly been “leaves on the tree.” They ultimately had to convert to a traditional relational database.
Of course there are parallels in math, as when you try a technique for integrating or parameterizing that seems reasonable but won’t actually work.
Yep. Having worked both as a mathematician and a programmer, I’ve found that the idea of objectivity and clear feedback loops starts to disappear as the complexity amps up and you move away from the learning environment. It’s not unusual to discover incorrect proofs out on the fringes of mathematical research that have not yet become part of the canon, nor is it uncommon (in fact, it’s very common) to find running production systems where the code works by accident, due to some strange, unexpected confluence of events.
Programming, OTOH, gives you clear feedback loops.
Feedback, yes. Clarity… well, sometimes it’s “yes, it works” today, and “actually, it doesn’t if the parameter is zero and you called the procedure on the last day of the month” when you put it in production.
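As a sketch of how that failure mode looks in practice (the function and its parameters are hypothetical, invented just for illustration), here’s code that passes casual testing but hides exactly those two edge cases:

```python
from datetime import date
import calendar

def days_per_unit(d: date, units: int) -> float:
    """Split the days remaining in d's month across `units` buckets.

    Hypothetical example: behaves fine on typical inputs, but
    (1) raises ZeroDivisionError when units == 0, and
    (2) returns 0.0 on the last day of the month.
    """
    last_day = calendar.monthrange(d.year, d.month)[1]  # number of days in the month
    remaining = last_day - d.day
    return remaining / units

# Everyday use looks perfectly healthy:
print(days_per_unit(date(2024, 3, 10), 3))  # 7.0

# The cases that only surface in production:
# days_per_unit(date(2024, 3, 10), 0)   -> ZeroDivisionError
# days_per_unit(date(2024, 3, 31), 2)   -> 0.0, which is probably not what the caller meant
```

The point is that the happy-path feedback loop (“it printed 7.0, ship it”) says nothing about the zero parameter or the end-of-month boundary.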
Proof verification is meant to minimize this gap between proving and programming.
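As a toy illustration of what that looks like, here is a trivial theorem in Lean 4, where the kernel, rather than a human reviewer, checks the proof (the theorem name is made up for this example):

```lean
-- Checked mechanically by Lean's kernel: no referee needed.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

If the proof term didn’t actually establish the statement, the checker would reject it — the feedback loop is as crisp as a failing compile.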