The rule is not so much “Don’t divide by zero” as “Don’t perform operations which delete your data.” Dividing by zero doesn’t produce a contradiction; it eliminates meaning in the data. You -can- divide by zero, you just have to do so in a way that maintains all the data you started with.
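For concreteness, here’s a minimal sympy sketch of the kind of thing I mean (my own illustration, not anything formal):

```python
import sympy as sp

x = sp.symbols('x')

# x**2 = x has two solutions:
print(sp.solve(sp.Eq(x**2, x), x))   # [0, 1]

# "Divide both sides by x" and you get x = 1.  The x = 0 solution is
# gone -- the step silently assumed x != 0, and that is the data loss.
print(sp.solve(sp.Eq(x, 1), x))      # [1]

# Done carefully (factor instead of dividing, or split into the x = 0
# and x != 0 cases), nothing is lost:
print(sp.solve(sp.Eq(x * (x - 1), 0), x))   # [0, 1]
```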
I completely fail to understand how you got such a doctrine on dividing by zero. Mathematics just doesn’t work like that.
Are you denying this as somebody with strong knowledge of mathematics?
(I need to know what prior I should assign to this conceptualization being wrong. I got it from a mathematics instructor, quite possibly the best I ever had, in his explanation on why canceling out denominators doesn’t fix discontinuities.)
ETA: The problem he was demonstrating it with focused more on the error of -adding- information than on removing it, but he did show us how information could be deleted from an equation by inappropriately multiplying or dividing by zero, showing how discontinuities could be removed or introduced. He also demonstrated a really weird function involving a square root which had two solutions, one of which introduced a discontinuity and one of which didn’t.
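His exact examples aren’t reproduced here, but here’s a minimal stand-in for the canceling-denominators point, again sketched in sympy (my illustration, not his):

```python
import sympy as sp

x = sp.symbols('x')

f = (x**2 - 1) / (x - 1)   # undefined at x = 1 (0/0)
g = sp.cancel(f)           # x + 1, defined everywhere

print(g)              # x + 1
print(f.subs(x, 1))   # nan -- the original really has a hole at x = 1
print(g.subs(x, 1))   # 2   -- canceling quietly papers over that hole

# The limit exists, but that's a statement about the limit, not about
# f(1); canceling the denominator changes the domain rather than
# repairing the discontinuity.
print(sp.limit(f, x, 1))   # 2
```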
I’m a graduate student, working on my thesis.
I accept that this is some pedagogical half-truth, but I just don’t see how it benefits people to pretend mathematics cares about whether or not you “eliminate meaning in the data.” There’s no meta-theorem that says information in an equation has to be preserved, whatever that means.