Fascinating stuff. Here are some largely unrelated hasty generalizations based on my limited experience in the world of software development.
I’ll classify the software development work I’ve done as falling into three broad categories: frontend, backend, and data analysis:
Frontend software development is the most forgiving of mistakes & imperfect code. There’s little opportunity to do permanent damage to production data. It’s typically easy to test your code by clicking around a bunch, so thinking through the problem carefully to ensure correctness by design is often overkill. Also, requirements are more subject to change and code gets thrown away quicker, so code written to be correct by design has less time to accrue benefits over the long term.
Backend software development tends to be less tolerant of mistakes.
Data analysis is arguably deserving of the highest quality code, because it can fail silently. Illustration: At the last company I worked at, I was the maintainer of the company’s email system, including marketing emails. I once found a subtle bug in code written by one of the company’s data scientists that caused it to double or triple count purchases generated as a result of marketing emails. Sometimes you can write test cases for data analysis code, but it was very difficult in the environment we were using, and it can be hard to capture all of the corner cases. However, software developers are perennially overconfident in their ability to produce bug-free code (the more experienced you get as a backend developer, the more paranoid you become), and I wouldn’t be surprised if the industry as a whole has yet to catch on to this problem of analytics code silently giving bad numbers. (Also, much data analysis programming is surprisingly unsophisticated mathematically… it’s mostly sums and averages.)
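To make the failure mode concrete, here is a hypothetical sketch of how that kind of silent double counting can arise (all names and data are invented, not from the actual incident): a purchase-attribution join fans out when the join key isn’t unique, and a cheap invariant check catches it.

```python
# Hypothetical illustration of a silent double-counting bug in analytics
# code. Each purchase should be attributed to at most one marketing email,
# but if the email log has multiple rows per user (one per send), a naive
# join counts the purchase once per matching row.

purchases = [
    {"user": "alice", "amount": 40},
    {"user": "bob", "amount": 25},
]

# Two email-log rows for alice, because she was sent two campaigns.
email_log = [
    {"user": "alice", "campaign": "spring_sale"},
    {"user": "alice", "campaign": "summer_sale"},
    {"user": "bob", "campaign": "spring_sale"},
]

# Buggy attribution: joining on user counts alice's $40 purchase twice.
attributed_buggy = sum(
    p["amount"] for p in purchases for e in email_log if e["user"] == p["user"]
)

# Fixed: deduplicate to one row per user before attributing.
emailed_users = {e["user"] for e in email_log}
attributed_fixed = sum(p["amount"] for p in purchases if p["user"] in emailed_users)

# A cheap sanity check that would have flagged the bug: attributed
# revenue can never exceed total revenue.
total = sum(p["amount"] for p in purchases)
assert attributed_fixed <= total
print(attributed_buggy, attributed_fixed)
```

The buggy version reports $105 of attributed revenue against $65 of total purchases, which is exactly the kind of number that looks plausible in a dashboard and fails silently unless an invariant like `attributed <= total` is asserted somewhere.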
I’m highly intrigued by Steve Yegge’s liberal/conservative dichotomy of software development cultures. Under this paradigm, liberal cultures (more common at smaller/startup companies or those working on less critical applications) are more tolerant of errors, while conservative cultures are less tolerant.
If anyone is interested, I recently scribbled down some tips on writing reliable software for a friend working in finance who is learning to code to automate aspects of his job. (Code review is something I didn’t mention that works well if you’re working in a group.)
By the way, does anyone have any idea why, as Jonah says, there is so little metacognition among mathematicians? I found the same thing to a surprising degree in software development (by thinking about my thought process while coding, I was frequently able to generate lines of inquiry I hadn’t read or heard about from anyone).
I think there is quite a bit of metacognition, especially in the context of teaching mathematics to others. One issue is that there is some evidence that successful mathematicians are quite heterogeneous in how they think (the classic analyst vs algebraist dichotomy).
By the way, does anyone have any idea why, as Jonah says, there is so little metacognition among mathematicians?
This isn’t a complete answer, but I think a significant part of it is that metacognition is not incentivized professionally for researchers who are not at the top of their fields. Few pure mathematicians are capable of doing original work that requires a lot of metacognition. Still, there are questions around why mathematicians aren’t metacognitive anyway (professional incentives are not the only things that drive people’s behavior). I don’t have a great answer to this, but my subsequent articles will shed some light on the situation.
Possibly because metacognition isn’t much like mathematics, so there’s no reason to expect mathematicians to be especially interested in it.
Because people have a straightforwardly magical/mystical conception of how Doing Math works, much as they do with other forms of cognition.
By liberal cultures being more tolerant of errors, do you mean that they design with error robustness in mind, or they don’t assign as much blame for them?
Second one.