I wonder if this (distrusting imperfect algorithms more than imperfect people) holds for programmers and mathematicians. Indeed, the popular perception seems to be that such folks overly trust algorithms...
I was under the impression that mathematicians are actually too distrusting of imperfect algorithms (compared to their actual error rates). The three examples I ran into myself were:
In analysis, in particular in bifurcation analysis, a (small) parameter epsilon is introduced which determines the size of the perturbation. Analysts routinely claim only that 'there exists an epsilon small enough' for their analysis to hold (example values are often around 1/1000), even though the techniques frequently remain valid for values as large as epsilon = 1/2 (for example). Analysts who are unwilling to make statements about such large values of epsilon seem to be too mistrusting of their own techniques/algorithms.
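A toy illustration of the gap (my own made-up example, not taken from any particular paper): for the perturbed equation x^2 + eps*x - 1 = 0, the positive root has the first-order expansion x ≈ 1 - eps/2. The 'small enough epsilon' hedge suggests tiny values, but one can simply check how the expansion does at eps = 1/2:

```python
import math

def exact_root(eps):
    """Positive root of x^2 + eps*x - 1 = 0, by the quadratic formula."""
    return (-eps + math.sqrt(eps * eps + 4)) / 2

def perturbative_root(eps):
    """First-order perturbation expansion around eps = 0: x ~ 1 - eps/2."""
    return 1 - eps / 2

for eps in (1e-3, 0.5):
    err = abs(exact_root(eps) - perturbative_root(eps))
    print(f"eps = {eps}: absolute error of the expansion = {err:.2e}")
```

At eps = 1/1000 the error is about 10^-7; at eps = 1/2 it is still only about 0.03 (roughly 4% relative error) — far beyond 'small enough', yet entirely usable.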
Whether or not pi and e are normal are open questions in mathematics, but statistical analysis of the first couple of billion digits (if I am not mistaken) suggests that pi might be normal whereas e is probably not. Still, many mathematicians remain agnostic on these questions, on the grounds that only a few billion data points have been obtained.
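The flavour of the statistical evidence can be sketched in a few lines (on a far smaller sample than the billions of digits mentioned above). Using Gibbons' unbounded spigot algorithm to generate decimal digits of pi, one can compute the chi-square statistic for the hypothesis that all ten digits occur equally often:

```python
def pi_digits(n):
    """First n decimal digits of pi via Gibbons' unbounded spigot algorithm."""
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    digits = []
    while len(digits) < n:
        if 4 * q + r - t < m * t:
            digits.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return digits

n = 2000
digits = pi_digits(n)
counts = [digits.count(d) for d in range(10)]
expected = n / 10
# Chi-square statistic with 9 degrees of freedom: values near 9 are
# unremarkable, i.e. consistent with uniformly distributed digits.
chi2 = sum((c - expected) ** 2 / expected for c in counts)
print(counts)
print(f"chi-square = {chi2:.2f}")
```

This is only a sketch of the method, of course: the actual analyses use vastly more digits and test block frequencies of every length, not just single digits.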
In the study of number fields, probabilistic algorithms are used to compute certain interesting invariants such as the class group (algorithms that are guaranteed to give the right answer exist, but are too slow to be used in anything other than a few test cases). These algorithms generally come with a guaranteed error rate of about 0.01% (sometimes this is a tunable parameter), but I know of a few mathematicians in this field (which makes it a high percentage, since I only know a few mathematicians in this field) who will frequently doubt the outcome of such an algorithm.
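For comparison — an analogy of my own, not one of the class-group algorithms themselves — the best-known probabilistic algorithm in number theory with a tunable error bound is the Miller-Rabin primality test: each independent round of random witnesses cuts the probability of wrongly declaring a composite number 'prime' by at least a factor of 4, so the error rate can be driven far below the 0.01% figure above at negligible cost:

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test: for composite n, the chance of a wrong 'True'
    is at most 4**(-rounds); for prime n the answer is always True."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True  # probably prime; error chance <= 4**(-rounds)

print(is_probable_prime(2**61 - 1))  # True  (a Mersenne prime)
print(is_probable_prime(2**61 + 1))  # False (divisible by 3)
```

With rounds=20 the error bound is 4^-20, below 10^-12 — and refusing to believe such an answer while accepting hand-checked proofs (which surely err more often than that) is exactly the asymmetry I mean.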
Of course these are only my personal experiences, but I’d guess that mathematicians are on the whole too fond of certainty and trust imperfect algorithms too little rather than too much.