Here is a wacky idea I’ve had forever.

There are a bunch of areas in math where you get expressions of the form 0/0 and they resolve to some number, but it’s not always the same number. I’ve heard some people say that 0/0 “can be any number”. Can we formalize this? The formalism would have to treat 4⋅0 as something different from 3⋅0, so that if you divide the first by 0, you get 4, but if you divide the second by 0, you get 3.
Here is a way to turn this into what may be a field or ring. Each element is a function f: ℤ→ℝ, where a function of the form (⋯, 0, 4, 3, 5, 1, 2, 0, ⋯) reads as 4⋅0² + 3⋅0 + 5 + 1/0 + 2/0². Addition is component-wise, i.e., (f+g)(z) := f(z) + g(z) (so 3⋅0 + 6⋅0 = 9⋅0; this makes sense), and multiplication should satisfy 3/0 ⋅ 2/0 = 6/0², so we get the rule
(f⋅g)(z) := ∑_{k+ℓ=z} f(k)⋅g(ℓ)
This becomes a problem once elements with infinite support are considered, i.e., functions f that are nonzero at infinitely many values, since then the sum may not converge. But it is well defined for elements with finite support. This is all similar to how polynomials are handled formally, except that polynomials only go in one direction (i.e., they’re functions from ℕ rather than ℤ), and that also solves the non-convergence problem: even if infinitely many nonzero coefficients are allowed (formal power series), multiplication is well defined, since for any n∈ℕ there are only finitely many pairs of natural numbers k, ℓ such that k+ℓ=n.
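As a quick concreteness check (my own minimal Python sketch, not part of the original idea): finite-support elements can be represented as dictionaries mapping exponents to coefficients, with multiplication given exactly by the convolution rule above.

```python
# Finite-support elements f: Z -> R as dicts {exponent: coefficient}.
# The exponent counts powers of the formal "0" (negative = division by 0).

def add(f, g):
    out = dict(f)
    for z, c in g.items():
        out[z] = out.get(z, 0) + c
    return {z: c for z, c in out.items() if c != 0}

def mul(f, g):
    # (f*g)(z) = sum over k+l=z of f(k)*g(l)
    out = {}
    for k, a in f.items():
        for l, b in g.items():
            out[k + l] = out.get(k + l, 0) + a * b
    return {z: c for z, c in out.items() if c != 0}

assert add({1: 3}, {1: 6}) == {1: 9}      # 3*0 + 6*0 = 9*0
assert mul({-1: 3}, {-1: 2}) == {-2: 6}   # (3/0)*(2/0) = 6/0^2
```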
The additively neutral element in this setting is 0 := (⋯, 0, 0, 0, 0, 0, ⋯) and the multiplicatively neutral element is 1 := (⋯, 0, 0, 1, 0, 0, ⋯). Additive inverses are easy: (−f)(z) = −f(z) for all z∈ℤ. The interesting part is multiplicative inverses. Of course, there is no inverse of 0, so we still can’t divide by the ‘real’ zero. But I believe all elements with finite support do have a multiplicative inverse (there should be a straightforward inductive proof of this). Interestingly, those inverses are not finite anymore, but they are periodic. For example, the inverse of 1⋅0 is just 1/0, but the inverse of 1 + 1⋅0 is actually
1 − 1⋅0 + 1⋅0² − 1⋅0³ + 1⋅0⁴ − ⋯
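The inductive construction can be sketched mechanically (again my own Python illustration; `inverse_coeffs` is a made-up helper): solve the convolution equation f⋅h = 1 one coefficient at a time, starting from the lowest exponent of f.

```python
def inverse_coeffs(f, n_terms=8):
    # f: finite-support element as dict {exponent: coefficient}, f nonzero.
    # Returns the first n_terms coefficients of the inverse h, solving f*h = 1
    # coefficient by coefficient; the inverse starts at exponent -min(f).
    lo = min(f)
    h = {}
    for j in range(n_terms):
        s = sum(f.get(lo + i, 0) * h[-lo + j - i] for i in range(1, j + 1))
        h[-lo + j] = ((1 if j == 0 else 0) - s) / f[lo]
    return h

# Inverse of 1 + 1*0 (i.e. 1 + x): coefficients 1, -1, 1, -1, ...
assert inverse_coeffs({0: 1, 1: 1}, 5) == {0: 1.0, 1: -1.0, 2: 1.0, 3: -1.0, 4: 1.0}
```

The computed inverses go on forever but repeat, matching the periodicity observed above.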
I think this becomes a field with well-defined operations if one considers only the elements with finite support and elements with inverses of finite support. (The product of two elements-whose-inverses-have-finite-support should itself have an inverse of finite support, because (f⋅g)⁻¹ = g⁻¹⋅f⁻¹.) I wonder if this structure has been studied somewhere… probably without anyone thinking of the interpretation considered here.
If I’m correctly understanding your construction, it isn’t actually using any properties of 0. You’re just looking at formal power series (with negative exponents allowed) and writing powers of 0 instead of x. Identifying x with “0” gives exactly what you motivated: 1/x and 2/x (which are 1/0 and 2/0 when interpreted) are two different things.
The structure you describe (where we want elements and their inverses to have finite support) turns out to be quite small. Specifically, this field consists precisely of all monomials in x. Certainly all monomials work; the inverse of c⋅x^k is c⁻¹⋅x^(−k) for any c∈ℝ∖{0} and k∈ℤ.
To show that nothing else works, let P(x) and Q(x) be any two nonzero sums of finitely many integer powers of x (such as 1/x + 1 − x²). Then the leading term (the product of the highest-power terms of P and Q) will be some nonzero thing. But also, the smallest term (the product of the lowest-power terms of P and Q) will be some nonzero thing. Moreover, we can’t get either of these to cancel out. So the product can never be equal to 1. (Unless both are monomials.)
For an example, think about multiplying (x + 1/x)(1/x − 1/x³). The leading term x⋅(1/x) = x⁰ is the highest-power term and (1/x)⋅(−1/x³) = −1/x⁴ is the lowest-power term. We can get all the inner terms to cancel, but never these two outside terms.
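This can be checked numerically (a throwaway Python sketch of my own, using the convolution product): the inner terms of this product cancel, but the two extreme exponents survive, so the result cannot be 1.

```python
def mul(f, g):
    # convolution product of finite-support elements {exponent: coefficient}
    out = {}
    for k, a in f.items():
        for l, b in g.items():
            out[k + l] = out.get(k + l, 0) + a * b
    return {z: c for z, c in out.items() if c != 0}

p = {1: 1, -1: 1}     # x + 1/x
q = {-1: 1, -3: -1}   # 1/x - 1/x^3
assert mul(p, q) == {0: 1, -4: -1}   # = x^0 - 1/x^4, not 1
```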
A larger structure to take would be formal Laurent series in x. These are sums of finitely many negative powers of x and arbitrarily many positive powers of x. This set is closed under multiplicative inverses.
Equivalently, you can take the set of rational functions in x. You can recover the formal Laurent series from a rational function by doing long division / taking the Taylor expansion.
(If the object extends infinitely in the negative direction and is bounded in the positive direction, it’s just a formal Laurent series in 1/x.)
If it extends infinitely in both directions, that’s an interesting structure I don’t know how to think about. For example, (…, 1, 1, 1, 1, 1, …) = ⋯ + x⁻² + x⁻¹ + 1 + x + x² + ⋯ stays the same when multiplied by x. This means what we have isn’t a field. I bet there’s a fancy algebra word for this object but I’m not aware of it.
You’ve understood correctly minus one important detail:
The structure you describe (where we want elements and their inverses to have finite support)
Not elements and their inverses! Elements or their inverses. I’ve shown the example of 1 + 1⋅x to demonstrate that you quickly get infinite inverses, and you’ve come up with an abstract argument why finite inverses won’t cut it:
To show that nothing else works, let P(x) and Q(x) be any two nonzero sums of finitely many integer powers of x (such as 1/x + 1 − x²). Then the leading term (the product of the highest-power terms of P and Q) will be some nonzero thing. But also, the smallest term (the product of the lowest-power terms of P and Q) will be some nonzero thing. Moreover, we can’t get either of these to cancel out. So the product can never be equal to 1. (Unless both are monomials.)
In particular, your example of x + 1/x has the inverse x − x³ + x⁵ − x⁷ ⋯. Perhaps a better way to describe this set is ‘all you can build in finitely many steps using addition, inverse, and multiplication, starting from only elements with finite support’. Perhaps you can construct infinite-but-periodic elements with infinite-but-periodic inverses; if so, those would be in the field as well (if it’s a field).
If you can construct (⋯, 1, 1, 1, 1, ⋯), it would not be a field. But constructing this may be impossible.
I’m currently completely unsure whether the resulting structure is a field. If you take a bunch of finite elements, take their infinite-but-periodic inverses, and multiply those inverses, the resulting element again has a finite inverse, due to the argument I’ve shown in the previous comment. But if you use addition on one of them, things may go wrong.
A larger structure to take would be formal Laurent series in x. These are sums of finitely many negative powers of x and arbitrarily many positive powers of x. This set is closed under multiplicative inverses.
Thanks; this is quite similar—although not identical.
Perhaps a better way to describe this set is ‘all you can build in finitely many steps using addition, inverse, and multiplication, starting from only elements with finite support’.
Ah, now I see what you are after.
But if you use addition on one of them, things may go wrong.
This is exactly right, here’s an illustration:
Here is a construction of (…, 1, 1, 1, …): We have that 1 + x + x² + ⋯ is the inverse of 1 − x. Moreover, 1/x + 1/x² + 1/x³ + ⋯ is the inverse of x − 1. If we want this thing to be closed under inverses and addition, then this implies that
(1 + x + x² + ⋯) + (1/x + 1/x² + 1/x³ + ⋯) = ⋯ + 1/x³ + 1/x² + 1/x + 1 + x + x² + ⋯
can be constructed.
But this is actually bad news if you want your multiplicative inverses to be unique. Since 1/x + 1/x² + 1/x³ + ⋯ is the inverse of x − 1, we have that −1/x − 1/x² − 1/x³ − ⋯ is the inverse of 1 − x. So then you get
−1/x − 1/x² − 1/x³ − ⋯ = 1 + x + x² + ⋯
so
0 = ⋯ + 1/x³ + 1/x² + 1/x + 1 + x + x² + ⋯
On the one hand, this is a relief, because it explains the strange property that this thing stays the same when multiplied by x. On the other hand, it means that the coordinate representation (…, 1, 1, 1, …) is no longer well-defined: we can do operations which, by the rules, should produce equal outputs, but they produce different coordinates.
In fact, for any polynomial (such as 1 − x), you can find one inverse which uses arbitrarily high positive powers of x and another inverse which uses arbitrarily low negative powers of x. The easiest way to see this is by looking at another example, say x² + 1/x.
One way you can find the inverse of x² + 1/x is to get the 1 out of the x² term and keep correcting: first you have (x² + 1/x)(1/x² + ?), then you have (x² + 1/x)(1/x² − 1/x⁵ + ?), then you have (x² + 1/x)(1/x² − 1/x⁵ + 1/x⁸ + ?), and so on.
Another way you can find the inverse of x² + 1/x is to write its terms in the opposite order. So you have 1/x + x² and you do the same correcting process, starting with (1/x + x²)(x + ?), then (1/x + x²)(x − x⁴ + ?), and continuing in the same way.
Then subtract these two infinite series and you have a bidirectional sum of integer powers of x which is equal to 0.
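Both correcting processes can be run mechanically (a Python sketch of my own; the helper names are invented). `inv_up` peels off the lowest-exponent term, producing the expansion in ever-higher powers; `inv_down` substitutes x → 1/x so as to peel the highest term instead.

```python
def inv_up(f, n=7):
    # Expand 1/f "upward": peel the lowest-exponent term and keep correcting.
    lo = min(f)
    h = {}
    for j in range(n):
        s = sum(f.get(lo + i, 0) * h[-lo + j - i] for i in range(1, j + 1))
        h[-lo + j] = ((1 if j == 0 else 0) - s) / f[lo]
    return h

def inv_down(f, n=7):
    # Peel the highest-exponent term: substitute x -> 1/x, expand, flip back.
    rev = {-k: c for k, c in f.items()}
    return {-k: c for k, c in inv_up(rev, n).items()}

f = {2: 1, -1: 1}   # x^2 + 1/x
up = {k: v for k, v in inv_up(f).items() if v}       # x - x^4 + x^7 - ...
down = {k: v for k, v in inv_down(f).items() if v}   # 1/x^2 - 1/x^5 + 1/x^8 - ...
assert up == {1: 1.0, 4: -1.0, 7: 1.0}
assert down == {-2: 1.0, -5: -1.0, -8: 1.0}
```

Both outputs are genuine inverses of the same element, one growing rightward and one leftward, exactly as in the two hand computations above.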
My hunch is that any bidirectional sum of integer powers of x which we can actually construct is “artificially complicated” and can be rewritten as a one-directional sum of integer powers of x. So this would mean that your number system is what you get when you take the union of Laurent series going in the positive and negative directions, where bidirectional coordinate representations are far from unique. I’d be delighted to hear a justification of this or a counterexample.
Here is a construction of (…, 1, 1, 1, …): We have that 1 + x + x² + ⋯ is the inverse of 1 − x. Moreover, 1/x + 1/x² + 1/x³ + ⋯ is the inverse of x − 1. [...]
Yeah, that’s conclusive. Well done! I guess you can’t divide by zero after all ;)
I think the main mistake I’ve made here is to assume that inverses are unique without questioning it, which of course doesn’t make sense at all if I don’t yet know that the structure is a field.
My hunch is that any bidirectional sum of integer powers of x which we can actually construct is “artificially complicated” and can be rewritten as a one-directional sum of integer powers of x. So this would mean that your number system is what you get when you take the union of Laurent series going in the positive and negative directions, where bidirectional coordinate representations are far from unique. I’d be delighted to hear a justification of this or a counterexample.
So, I guess one possibility is that, if we let [x] be the equivalence class of all elements that are equal to x in this structure, the resulting set of classes is isomorphic to the Laurent series. But another possibility could be that it all collapses into a single class—right? At least I don’t yet see a reason why that can’t be the case (though I haven’t given it much thought). You’ve just proven that some elements equal zero; perhaps it’s possible to prove it for all elements.
If you allow series that are infinite in both directions, then you have a new problem, which is that multiplication may no longer be possible: the sums involved need not converge. And there’s also the issue already noted, that some things that don’t look like they equal zero may in some sense have to be zero. (Meaning “absolute” zero = (⋯, 0, 0, 0, ⋯), rather than the thing you originally called zero, which should maybe be called something like ε instead.)
What’s the best we could hope for? Something like this. Write R for ℝ^ℤ, i.e., all formal, potentially double-ended Laurent series. There’s an addition operation defined on the whole thing, and a multiplication operation defined on some subset of pairs of its elements, namely those for which the relevant sums converge (or maybe are “summable” in some weaker sense). There are two problems: (1) some products aren’t defined, and (2) at least with some ways of defining them, there are some zero-divisors—e.g., (x − 1) times the sum of all powers of x, as discussed above. (I remark that if your original purpose is to be able to divide by zero, perhaps you shouldn’t be too troubled by the presence of zero-divisors; contrapositively, if they trouble you, perhaps you shouldn’t have wanted to divide by zero in the first place.)
We might hope to deal with issue 1 by restricting to some subset A of R, chosen so that all the sums that occur when multiplying elements of A are “well enough behaved”; if issue 2 persists after doing that, we might hope to deal with it by taking a quotient of A, i.e., treating some of its elements as being equal to one another.
Some versions of this strategy definitely succeed, and correspond to things just_browsing already mentioned above. For instance, let A consist of everything in R with only finitely many negative powers of x, the Laurent series already mentioned; this is a field. Or let it consist of everything that’s the series expansion of a rational function of x; this is also a field. This latter is, I think, the nearest you can get to “finite or periodic”. The periodic elements are the ones whose denominator has degree at most 1. Degree ≤ 2 brings in arithmetico-periodic elements—things that go, say, 1, 1, 2, 2, 3, 3, 4, 4, etc. I’m pretty sure that degree ≤ d in the denominator is the same as coefficients being ultimately (periodic + polynomial of degree < d). And this is what you get if you say you want to include both 1 and x, and to be closed under addition, subtraction, multiplication, and division.
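The arithmetico-periodic pattern can be checked directly (my own quick Python sketch; the denominator (1−x)(1−x²) is an assumed example of degree 2... well, degree 3 as a polynomial, with repeated root 1): expanding its reciprocal as a power series gives coefficients 1, 1, 2, 2, 3, 3, …

```python
def series_of_reciprocal(den, n=8):
    # Power-series coefficients of 1/den; den is {exponent: coeff} with den[0] != 0.
    h = []
    for j in range(n):
        s = sum(den.get(i, 0) * h[j - i] for i in range(1, j + 1))
        h.append(((1 if j == 0 else 0) - s) / den[0])
    return h

# (1 - x)(1 - x^2) = 1 - x - x^2 + x^3:
assert series_of_reciprocal({0: 1, 1: -1, 2: -1, 3: 1}) == [1, 1, 2, 2, 3, 3, 4, 4]
# (1 - x)^2 = 1 - 2x + x^2 gives linearly growing coefficients:
assert series_of_reciprocal({0: 1, 1: -2, 2: 1}) == [1, 2, 3, 4, 5, 6, 7, 8]
```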
Maybe that’s already all you need. If not, perhaps the next question is: is there any version of this that gives you a field and that allows, at least, some series that are infinite in both directions? Well, by considering inverses of (1 − x)^k we can get sequences that grow “rightward” as fast as any polynomial. So if we want the sums inside our products to converge, we’re going to need our sequences to shrink faster-than-polynomially as we move “leftward”. So here’s an attempt. Let A consist of formal double-ended Laurent series ∑_{n∈ℤ} a_n x^n such that for n < 0 we have |a_n| = O(t^(−n)) for some t < 1, and for n > 0 we have |a_n| = O(n^k) for some k. Clearly the sum or difference of two of these has the same properties. What about products? Well, if we multiply together a, b to get c, then c_n = ∑_{p+q=n} a_p b_q. The terms with p < 0 < q are bounded in absolute value by some constant times t^(−p) q^k, where t gets its value from a and k gets its value from b; so the sum of these terms is bounded by some constant times ∑_{q>0} t^(q−n) q^k, which in turn is a constant times t^(−n). Similarly for the terms with q < 0 < p; the terms with p, q both of the same sign are bounded by a constant times t^(−n) when they’re negative and by a constant times n^(k_a + k_b) when they’re positive. So, unless I screwed up, products always “work” in the sense that the sums involved converge and produce a series that’s in A. Do we have any zero-divisors? Eh, I don’t think so, but it’s not instantly obvious.
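A quick numerical sanity check of the convergence claim (my own sketch, with the assumed parameters t = 1/2 leftward and linear growth rightward): truncations of the convolution sum stabilize as the window grows, i.e., the product coefficients converge.

```python
# One sequence in the proposed class A: |a_n| <= t^(-n) for n < 0 (t = 0.5),
# polynomial (here linear) growth for n >= 0.
T = 0.5

def a(n):
    return T ** (-n) if n < 0 else float(n + 1)

def conv_truncated(n, N):
    # Truncation of the product coefficient c_n = sum_{p+q=n} a_p * a_q.
    return sum(a(p) * a(n - p) for p in range(-N, N + 1))

# Doubling the truncation window barely changes the sum: the series converges.
assert abs(conv_truncated(0, 60) - conv_truncated(0, 120)) < 1e-9
assert abs(conv_truncated(3, 60) - conv_truncated(3, 120)) < 1e-9
```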
Here’s a revised version that I think does make it obvious that we don’t have zero-divisors. Instead of requiring that for n < 0 we have |a_n| = O(t^(−n)) for some t < 1, require that to hold for all t < 1. Once again our products always exist and still lie in A. But now it’s also true that for small enough t, the formal series themselves converge to well-behaved functions of t. In particular, there can’t be zero-divisors.
I’m not sure any of this really helps much in your quest to divide by zero, though :-).
Edit: this structure is not a field, as proved by just_browsing above.
This looks like the hyperreal numbers, with your 1/0 equal to their ω.