I have a super dumb question.
So, if you allow me to divide by zero, I can derive a contradiction from the basic rules of arithmetic to the effect that any two numbers are equal. But there’s a rule that I cannot divide by zero. In any other case, it seems like if I can derive a contradiction from basic operations of a system of, say, logic, then the logician is not allowed to say “Well...don’t do that”.
So there must be some other reason for the rule, ‘don’t divide by zero.’ What is it?
We don’t divide by zero because it’s boring.
You can totally divide by zero, but the ring you get when you do that is the zero ring, and it only has one element. When you start with the integers and try dividing by nonzero stuff, you can say “you can’t do that” or you can move out of the integers and into the rationals, into which the integers embed (or you can restrict yourself to only dividing by some nonzero things—that’s called localization—which is also interesting). The difference between doing that and dividing by zero is that nothing embeds into the zero ring (except the zero ring). It’s not that we can’t study it, but that we don’t want to.
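(A minimal sketch of this, purely illustrative: a throwaway Python class for the one-element ring. Every operation returns the same element, so “dividing by zero” is permitted but tells you nothing.)

    class ZeroRing:
        """The ring with one element: 0 = 1, and every operation collapses to it."""

        def __add__(self, other):
            return self            # 0 + 0 = 0

        def __mul__(self, other):
            return self            # 0 * 0 = 0 (and 0 is also the multiplicative identity)

        def inverse(self):
            return self            # here 0 is its own multiplicative inverse

        def __truediv__(self, other):
            return self * other.inverse()   # division by zero is legal...

        def __repr__(self):
            return "0 (= 1)"

    zero = ZeroRing()
    print(zero / zero)             # 0 (= 1): legal, but every answer is the same element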
Also, in the future, if you want to ask math questions, ask them on math.stackexchange.com (I’ve answered a version of this question there already, I think).
Thanks, I think that answers my question.
What do you mean by “you get”? Do you mean Wheel theory or what?
I mean if you localize a ring at zero you get the zero ring. Equivalently, the unique ring in which zero is invertible is the zero ring. (Some textbooks will tell you that you can’t localize at zero. They are haters who don’t like the zero ring for some reason.)
BTW, how come the ring with one element isn’t usually considered a field?
The theorems work out nicer if you don’t. A field should be a ring with exactly two ideals (the zero ideal and the unit ideal), and the zero ring has one ideal.
Ah, so it’s for exactly the same reason that 1 isn’t prime.
Yes, more or less. On nLab this phenomenon is called too simple to be simple.
We often want the field without zero to form a multiplicative group, and this isn’t the case in the ring with one element (because the empty set lacks an identity and hence isn’t a group). Indeed we could take the definition of a field to be

A ring such that the non-zero elements form a multiplicative group.

and this is fairly elegant.
The rule isn’t that you cannot divide by zero. You need a rule to allow you to divide by a number, and the rule happens to only allow you to divide by nonzero numbers.
There are also lots of things logicians can tell you that you’re not allowed to do. For example, you might prove that (A or B) is equivalent to (A or C). You cannot proceed to cancel the A’s to prove that B and C are equivalent, unless A happens to be false. This is completely analogous to going from AB = AC to B = C, which is only allowed when A is nonzero.
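(A quick brute-force check of the analogy, purely illustrative Python: look for truth assignments where (A or B) and (A or C) agree but B and C differ. The only such cases have A true, matching the “unless A happens to be false” caveat.)

    from itertools import product

    # Assignments where (A or B) <-> (A or C) holds but B and C still differ.
    counterexamples = [
        (A, B, C)
        for A, B, C in product([False, True], repeat=3)
        if (A or B) == (A or C) and B != C
    ]
    print(counterexamples)
    # [(True, False, True), (True, True, False)]: cancelling A is only safe when A is False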
However, {false, true} - {true} has only one member, and so values from it become constant, whereas ℝ - {0} has many members and can therefore remain significant.
For the real numbers, the equation a x = b has infinitely many solutions if a = b = 0, no solutions if a = 0 but b ≠ 0, and exactly one solution whenever a ≠ 0. Because there’s nearly always exactly one solution, it’s convenient to have a symbol for “the one solution to the equation a x = b”, and that symbol is b / a; but you can’t write that if a = 0, because then there isn’t exactly one solution.
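(A small sketch of that trichotomy in Python with exact rationals; the helper name is just for illustration. The a ≠ 0 branch is the only one that can hand back a single value, namely b/a.)

    from fractions import Fraction

    def solve_ax_eq_b(a, b):
        """Describe the solution set of a*x = b over the rationals."""
        if a != 0:
            return Fraction(b, a)            # exactly one solution: this is what b/a means
        if b == 0:
            return "every x is a solution"   # 0*x = 0 for all x
        return "no solution"                 # 0*x can never equal a nonzero b

    print(solve_ax_eq_b(3, 2))   # 2/3
    print(solve_ax_eq_b(0, 0))   # every x is a solution
    print(solve_ax_eq_b(0, 5))   # no solution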
This is true of any field, almost by definition.
Didn’t they do the same with set theory? You can derive a contradiction from the existence of “the set of sets that don’t contain themselves”… therefore, build a system where you just can’t do that.
(of course, coming from the axioms, it’s more like “it wasn’t ever allowed”, like in Kindly’s comment, but the “new and updated” axioms were invented specifically so that wouldn’t happen.)
We divide by zero all the time, actually; derivatives are the long way about dividing by zero. We just work very carefully to cancel the actual zero out of the equation.
The rule is less “Don’t divide by zero”, as much as “Don’t perform operations which delete your data.” Dividing by zero doesn’t produce a contradiction, it eliminates meaning in the data. You -can- divide by zero, you just have to do so in a way that maintains all the data you started with. Multiplying by zero eliminates data, and can be used for the same destructive purpose.
I completely fail to understand how you got such a doctrine on dividing by zero. Mathematics just doesn’t work like that.
Are you denying this as somebody with strong knowledge of mathematics?
(I need to know what prior I should assign to this conceptualization being wrong. I got it from a mathematics instructor, quite possibly the best I ever had, in his explanation on why canceling out denominators doesn’t fix discontinuities.)
ETA: The problem he was demonstrating it with focused more on the error of -adding- information than removing it, but he did show us how information could be deleted from an equation by inappropriately multiplying by or dividing by zero, showing how discontinuities could be removed or introduced. He also demonstrated a really weird function involving a square root which had two solutions, one of which introduced a discontinuity, one of which didn’t.
I’m a graduate student, working on my thesis.
I accept that this is some pedagogical half-truth, but I just don’t see how it benefits people to pretend mathematics cares about whether or not you “eliminate meaning in the data.” There’s no meta-theorem that says information in an equation has to be preserved, whatever that means.
Dividing by zero leads to a contradiction
Never divide by zero
Division by zero
Not necessarily true. A good rule for introductory math students, but some advanced math requires dividing by zero. (As mentioned, that’s what a derivative is, a division by zero.)
Limits are a way of getting information out of a division by zero, which is why derivatives involve taking the limit.
Division by zero is kind of like the square root of a negative number (something introductory mathematics coursework also tells you not to do). It’s not an invalid operation, it’s just an operation you have to be aware of the ramifications of. (If it seems like zero has unusual behavior, well, the same is true of negative numbers with respect to zero and positive numbers, and again the same is true of positive numbers with respect to zero and negative numbers.)
You’ve got it the wrong way round. “A derivative is a division by zero” is the pedagogical lie for introductory students (probably one that causes more confusion than it solves), and advanced maths doesn’t require it.
Another link, this time explicitly dealing with derivatives and division by zero, in the vain hope that you’ll actually update someday.
What are you expecting me to update on? None of what you’ve sent me contradicts anything except the language I use to describe it.
A derivative -is- a division by zero; infinitesimal calculus, and limits, were invented to try to figure out what the value of a specific division by zero would be. Mathematicians threw a -fit- over infinitesimal calculus and limits, denying that division by zero was valid, and insisting that the work was therefore invalid.
So what exactly is our disagreement? That I regard limits as a way of getting information out of a division by zero? Or that I insist, on the basis that we -can- get information out of a division by zero, that a division by zero can be valid? Or is it something else entirely?
Incidentally, even if I were certain exactly what you’re trying to convince me of and it was something I didn’t already agree with, your links are nothing but appeals to authority, and they wouldn’t convince me -anyways-. They lack any kind of proof; they’re just assertions.
The definition of limit: “lim x → a f(x) = c” means: for all epsilon > 0, there exists delta > 0 such that for all x, if 0 < |x - a| < delta then |f(x) - c| < epsilon.
The definition of derivative: f’(x) = lim h → 0 (f(x+h) - f(x))/h
That is, for all epsilon > 0, there exists delta > 0 such that for all h, if 0 < |h| < delta then |(f(x+h) - f(x))/h - f’(x)| < epsilon.
At no point do we divide by 0. h never takes on the value 0.
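(A numerical illustration of that last point; the choice of f(x) = x² at x = 3 is arbitrary. The difference quotient is only ever evaluated at nonzero h, and it approaches f’(3) = 6 as h shrinks.)

    def difference_quotient(f, x, h):
        return (f(x + h) - f(x)) / h       # defined only for h != 0

    f = lambda x: x ** 2                   # f'(3) should be 6

    for h in [0.1, 0.01, 0.001, 1e-6]:     # h gets small but is never 0
        print(h, difference_quotient(f, 3, h))
    # prints values approaching 6; at no step is anything divided by zero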
Sigh. Consider this my last reply.
mstevens’ links have several demonstrations that division by zero leads to contradictions in arithmetic.
my link (singular) demonstrates that the definition of a derivative never requires division by zero.
Qiaochu’s proof in a sibling thread that the only ring in which zero has an inverse is the zero ring.
That you continue to say things like “A derivative -is- a division by zero” and “division by zero can be valid”, as if they were facts. Yes, you may have been taught these things, but that does not make them literally true, as many people have tried to explain to you.
Whose authority am I appealing to in my (singular) link? Doctor Rick? I imagine he’s no more a doctor than Dr. Laura. (I actually knew one of the “doctors” on the math forum once, and he wasn’t a Ph. D. (or even a grad student) either; just a reasonably intelligent person who understood mathematics properly.) The only thing he asserts is the classical definition of a derivative.
Or maybe you were just giving a fully general counterargument, without reading the link.
EDIT: It’s simply logically rude to ask for my credentials, and then treat every single argument you’ve been presented as an argument from authority, using that as a basis for dismissing them out of hand.
I am treating your links as arguments from authority, because they don’t provide proof of their assertions, they simply assert them. As I wrote there, I didn’t ask for your credentials to decide whether or not I was wrong, but to provide a prior probability of being wrong. It started pretty high. It declined; my mathematics instructor provided better arguments than you have, which have simply been assertions that I’m incorrect.
My experience with infinitesimal calculus is limited, so I can’t provide proofs that you’re wrong (and thus have no basis to say you’re wrong), but I haven’t seen proofs that my understanding is wrong, either, and thus have no basis on which to update in either direction. At this point I’m tapping out; I don’t see this discussion going anywhere.
You said “Dividing by zero doesn’t produce a contradiction”.
Several of these links include examples of contradictions. There is no authority required.
For example:
Er, 1/0 * 0 != 1.
The law of cancellation requires that all values being cancelled have an inverse. The inverse of 0 doesn’t exist in the set of real numbers (although it does exist in the hyperreals). This doesn’t mean you can’t multiply a number by the inverse of 0, but the product doesn’t exist in real numbers, either. (Hyperreal numbers don’t cancel out the way real numbers do, however; they can leave behind a hyperreal component [ETA: Or at least that’s my understanding from the way my instructor explained why removable discontinuities couldn’t actually be removed—open to proof otherwise].)
0 doesn’t have an inverse in the hyperreal numbers either (to see why this is true, consider the first-order statement “∀x, x*0 != 1”, which is true in the real numbers and therefore also true in the hyperreals by the transfer principle). From this it obviously follows that you can’t multiply a number by the inverse of 0.
Further, if you did decide to adjoin an inverse of zero to the hyperreals, the result would be the zero ring.
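(For reference, the short argument behind that claim, using only ordinary ring arithmetic: if zero had an inverse, every element would collapse to zero.)

    0 \cdot y = (0 + 0)\cdot y = 0\cdot y + 0\cdot y \;\Longrightarrow\; 0\cdot y = 0 .
    \text{So if } 0\cdot y = 1 \text{ for some } y, \text{ then } 1 = 0,
    \text{ and for every } x:\quad x = x\cdot 1 = x\cdot 0 = 0 .
    \text{Hence the only ring in which } 0 \text{ is invertible is the one-element (zero) ring.}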
Going to have to investigate more, but that looks solid.
Since you asked this of papermachine, it seems reasonable to reflect it back:
Are you asserting this as somebody with strong knowledge of mathematics?
Not compared to somebody who specializes in the field of mathematics, no.
But I don’t expect to change paper-machine’s mind, where paper-machine expects to change mine. I expect more than appeals to authority. I have some prior that paper-machine might be right, given that this is their field of expertise. My posterior odds that they have a strong knowledge of this particular subject, however, are shrinking pretty rapidly, since all I’m getting are links that come up early in a Google search.
Limits and calculus aren’t what I think of, at all, when I think of division. I pretty much limit it exclusively to multiplication by the multiplicative inverse, in mathematical systems where addition and multiplication work like you think they ought to. There are axioms that encompass all of “works like you think they ought to”, and a necessary one of them is that the multiplicative inverse of zero is not a number.
Thanks, that’s helpful. But I guess my point is that it seems to me to be a problem for a system of mathematics that one can do operations which, as you say, delete the data. In other words, isn’t it a problem that it’s even possible to use basic arithmetical operations to render my data meaningless? If this were possible in a system of logic, we would throw the system out without further ado.
And while I can construct a proof that 2 = 1 (what I called a contradiction, namely that a number be equal to its successor) if you allow me to divide by zero, I cannot do so with multiplication. So the cases are at least somewhat different.
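(For concreteness, the standard classroom version of that 2 = 1 “proof”, with the divide-by-zero step marked; this is just the usual textbook example.)

    \text{Let } a = b \neq 0.\quad
    a^2 = ab
    \;\Longrightarrow\; a^2 - b^2 = ab - b^2
    \;\Longrightarrow\; (a+b)(a-b) = b(a-b)
    \;\Longrightarrow\; a + b = b \quad\text{(both sides divided by } a-b = 0\text{: the illegal step)}
    \;\Longrightarrow\; 2b = b
    \;\Longrightarrow\; 2 = 1 .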
Qiaochu_Yuan already answered your question, but because he was pretty technical with his answer, I thought I should try to simplify the point here a bit. The problem with division by zero is that division is essentially defined through multiplication and the existence of certain inverse elements. It’s an axiom in itself in group theory that there are inverse elements, that is, for each a there is an x such that a*x = 1. Our notation for x here would be 1/a, and it’s easy to see why a * (1/a) = 1. Division is defined by these inverse elements: a/b is calculated by a * (1/b), where (1/b) is the inverse of b. (There’s a small concrete sketch of this after this comment.)
But, if you have both multiplication and addition, there is one interesting thing. If we assume addition is the group operation for all numbers (and we use “0” to signify the additive neutral element you get from adding together an element and its additive inverse, that is, “a + (-a) = 0”), and we want multiplication to work the way we like it to work (so that a*(x + y) = (a*x) + (a*y), that is, distributivity holds), something interesting happens.
Now, the neutral element 0 is such that x + 0 = x; this is by definition of the neutral element. Now watch the magic happen:

0*x = (0 + 0)*x = 0*x + 0*x

So 0*x = 0*x + 0*x. We subtract 0*x from both sides, leaving us with 0*x = 0.
Doesn’t matter what you are multiplying 0 with, you always end up with zero. So, assuming 1 and 0 are not the same number (in the zero ring they are the same; in fact 0 = 1 is the only element in the entire zero ring), you can’t get a number x such that 0*x = 1, since that would force 1 = 0*x = 0. Lacking an inverse element for 0, there’s no obvious way to define what it would mean to divide by zero. There are special situations where there is a natural way to interpret what it means to divide by zero, in which case, go for it. However, it’s separate from the division defined for other numbers.
And, if you end up dividing by zero because you somewhere assumed that there actually was such a number x that 0*x = 1, well, that’s just your own clumsiness.
Also, you can “prove” 1 = 2 if you multiply both sides by zero: 1*0 = 2*0 ⇒ 0 = 0. Division and multiplication by zero work in opposite directions: multiplication gets you from not-equals to equals, division gets you from equals to not-equals.
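(A concrete setting where “division is multiplication by an inverse” can be watched directly is arithmetic modulo a prime; the sketch below is only an illustration of that definition, with 7 as an arbitrary choice. Every nonzero element has an inverse, and asking for an inverse of 0 fails.)

    p = 7  # arithmetic modulo a prime: every nonzero element is invertible

    for b in range(1, p):
        inv = pow(b, -1, p)        # the x with b*x = 1 (mod p); needs Python 3.8+
        print(f"1/{b} = {inv} (mod {p}); check: {(b * inv) % p}")

    # pow(0, -1, p) raises ValueError: there is no x with 0*x = 1 (mod p)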
Excellent explanation, thank you. I’ve been telling everyone I know about your resolution to my worry. I believe in math again.
Maybe you can solve my similarly dumb worry about ethics: If the best life is the life of ethical action (insofar as we do or ought to prefer to do the ethically right thing over any other comforts or pleasures), and if ethical action consists at least largely in providing and preserving the goods of life for our fellow human beings, then if someone inhabited the limit case of the best possible life (by permanently providing immortality, freedom, and happiness for all human beings), wouldn’t they at the same time cut everyone else off from the best kind of life?
Ethical action is defined by situations. The best life in the scenario where we don’t have immortality, freedom, and happiness is to try to bring them about, but the best life in the scenario where we already have them is something different.
Good! That would solve the problem, if true. Do you have a ready argument for this thesis (I mean “but the best life in the scenario where we already have them is something different.”)?
“If true” is a tough thing here because I’m not a moral realist. I can argue by analogy for the best moral life in different scenarios being a different life but I don’t have a deductive proof of anything.
By analogy: the best ethical life in 1850 is probably not identical to the best ethical life in 1950 or in 2050, simply because people have different capacities and there exist different problems in the world. This means the theoretical most ethical life is actually divorced from the real most ethical life, because no one in 1850 could’ve given humanity all those things, and working toward them would’ve taken away ethical effort from, e.g., abolishing slavery. Ethics under uncertainty means that more than one person can be living the subjectively ethically perfect life even if only one of them will achieve their goal, because no one knows who that is ahead of time.
I think you mean x + 0 = x
yes. yes. i remember thinking “x + 0 =”. after that it gets a bit fuzzy.
You can do the same thing in any system of logic.
In more advanced mathematics you’re required to keep track of values you’ve canceled out; the given equation remains invalid even though the cancelled value has disappeared. The cancellation isn’t real; it’s a notational convenience which unfortunately is promulgated as a real operation in mathematics classes. All those cancelled-out values are in fact still there. That’s (one of) the mistakes performed in the proof you reference.
This strikes me as massively confused.
Keeping track of cancelled values is not required as long as you’re working with a group, that is, a set (like the reals) and an operation (like addition) that follows the kind of rules that addition with integers and multiplication with non-zero real values do. If you are working with a group, there’s no sense in which those canceled-out values are left dangling. Once you cancel them out, they are gone.
http://en.wikipedia.org/wiki/Group_%28mathematics%29 ← you can check group axioms here, I won’t list them here.
Then again, canceling out, as it is procedurally done in math classes, requires each and every group axiom. That basically means it’s nonsense to speak of canceling out with structures that aren’t groups. If you tried to cancel out stuff with a non-group, that’d basically be assuming stuff you know ain’t true.
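(Spelling out the cancellation step and the group axioms it uses; this is the standard argument, and it is exactly the move that is unavailable when a has no inverse.)

    a \cdot b = a \cdot c
    \;\Longrightarrow\; a^{-1}\cdot(a\cdot b) = a^{-1}\cdot(a\cdot c) \quad\text{(an inverse of } a \text{ exists)}
    \;\Longrightarrow\; (a^{-1}\cdot a)\cdot b = (a^{-1}\cdot a)\cdot c \quad\text{(associativity)}
    \;\Longrightarrow\; e\cdot b = e\cdot c \;\Longrightarrow\; b = c \quad\text{(identity element)}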
Which begs a question: What are these structures in advanced maths that you speak of?