It’s instructive to set out the proof you give for 0.999...=1 in number bases other than ten. For example base eleven, in which the maximum-value single digit is conventionally represented as A and amounts to 10 (base ten), while 10 (base eleven) amounts to 11 (base ten). So
Let x = 0.AAA...
10x = A.AAA...
10x − x = A
Ax = A
x = 1
0.AAA… = 1
But 0.A (base eleven) = 10⁄11 (base ten), which is bigger than 0.9 (base ten) = 9⁄10 (base ten). So shouldn’t that inequality apply to 0.AAA… (base eleven) and 0.999… (base ten) as well? (A debatable point maybe.) If so, then they can’t both equal 1, unless we say something like 0.999...=1 and 0.AAA...=1 are both valid but base-dependent equations, as indeed any such equation would be when using the top-valued single digit of its base. This would mean 0.111...=1 in binary.
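A quick numerical look at what these expansions do, as a minimal Python sketch (the helper name partial_sum is my own):

```python
# Partial sums of 0.ddd... where d is the top digit (base - 1):
# in every base they creep up toward the same value, 1.
def partial_sum(base, n_digits):
    return sum((base - 1) / base**k for k in range(1, n_digits + 1))

for base in (2, 10, 11):
    print(base, partial_sum(base, 30))   # all three print values at or just below 1.0
```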
f(x)=2/x
g(x)=1/x
f(x) > g(x) for all x > 0, but as x→∞, lim f(x) = lim g(x) = 0. Just because f gets there “later” does not mean it gets any less deep.
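The same point in a few runnable lines of Python, if it helps (a sketch):

```python
f = lambda x: 2 / x
g = lambda x: 1 / x
for x in (10, 1000, 100000):
    print(x, f(x), g(x))   # f stays above g at every x, yet both head to 0
```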
Repeating decimals are far enough removed from decimals that it’s like mixing rationals and integers.
I think I see your first point.
0.A{base11} = 10⁄11
0.9 = 9⁄10
0.A − 0.9 = 1⁄110 = 0.00909...
0.AA = 10⁄11 + 10⁄121
0.99 = 9⁄10 + 9⁄100
0.AA − 0.99 = 21⁄12100 = 0.001735537190082644628099...
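Those differences can be checked with exact rational arithmetic; a small Python sketch (the helper top_digit_sum is mine):

```python
from fractions import Fraction

def top_digit_sum(base, n):
    # value of n repetitions of the top digit after the point, exactly
    return sum(Fraction(base - 1, base**k) for k in range(1, n + 1))

for n in (1, 2, 3, 4):
    diff = top_digit_sum(11, n) - top_digit_sum(10, n)
    print(n, diff, float(diff))   # 1/110, 21/12100, ... shrinking toward 0
```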
Does this mean that because the difference or “lateness” gets smaller, tending to zero, each time a single identical digit is added to 0.A and 0.9 respectively, then 0.A… = 0.9...?
(Whereas the difference we get when we do this to, say, 0.8 and 0.9 gets larger each time, so we can’t say 0.8… = 0.9...)
No, I believe you are reaching for a different concept. It is true that the difference squashes towards 0, but that would be a different line of thinking. In a context where infinitesimals are allowed (i.e. non-real numbers) we might associate the series with different amounts and indeed find that they differ by a “minuscule amount”. But as we normally operate on reals, we only get a “real precision” result. For example, if you had to say which integers 3⁄4, 1 and 5⁄4 name, probably your best bet would be that all of them name the same integer 1, if you are restricted to integer precision. In the same way you might have 1 and 1 − epsilon be different numbers when infinitesimal accuracy is allowed, but a real plus anything infinitesimal is going to be the same real regardless of the infinitesimal (1 and 1 − epsilon are the same real in real precision).
What I was actually going for is that, for any r < 1, you can ask how many terms you need to get up to that level, and both series will give a finite answer. I.e. to get to the same “depth” as 0.999999… gets with 6 digits, you might need a bit less with 0.AAAAA… . It’s a “horizontal” difference instead of a “vertical” one. However, there is no number that one of the series could reach but the other does not (and the number that both series fail to reach is 1; it might be helpful to remember that a supremum is the smallest upper bound). If one series reaches a sum with 10 terms and the other reaches the same sum in 10000 terms, it’s equally good; we are only interested in what happens “eventually”, after all terms have been accounted for. The way we have defined what the repeating-digit sign means refers to limits, and it’s pretty much guaranteed to produce reals.
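To make the “horizontal” difference concrete, a sketch with integer arithmetic (the helper digits_needed is mine): for a gap of 10⁻²⁶ below 1, base ten needs 26 digits while base eleven needs only 25.

```python
def digits_needed(base, inv_gap):
    # smallest n with base**n >= inv_gap, i.e. with error base**-n <= 1/inv_gap
    n, p = 0, 1
    while p < inv_gap:
        p *= base
        n += 1
    return n

gap = 10**26   # the depth that 0.99...9 reaches with 26 nines
print(digits_needed(10, gap), digits_needed(11, gap))   # 26 vs 25
```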
So shouldn’t that inequality apply to 0.AAA… (base eleven) and 0.999… (base ten) as well? (A debatable point maybe).
Not debatable, just false. Formally, the fact that x_k < y_k for all k does not imply that lim_{k→∞} x_k < lim_{k→∞} y_k; passing to the limit only preserves the weak inequality ≤.
If I were to poke a hole in the (proposed) argument that 0.[k 9s]{base 10} < 0.[k As]{base 11} (0.9<0.A; 0.99<0.AA;...), I’d point out that 0.[2*k 9s]{base 10} > 0.[k As]{base 11} (0.99>0.A; 0.9999>0.AA;...), and that this gives the opposite result when you take k→∞ (in the standard sense of those terms). I won’t demonstrate it rigorously here, but the faulty link (under the standard meanings of real numbers and infinities) is that carrying the inequality through the limit just doesn’t create a necessarily-true statement.
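A quick exact check of that interleaving, as a Python sketch (helper name mine):

```python
from fractions import Fraction

def top_digit_sum(base, n):
    return sum(Fraction(base - 1, base**j) for j in range(1, n + 1))

for k in (1, 2, 3, 4):
    # k nines < k As < 2k nines, e.g. 0.9 < 0.A < 0.99
    print(top_digit_sum(10, k) < top_digit_sum(11, k) < top_digit_sum(10, 2 * k))
```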
0.111...{binary} is 1, basically for the Dedekind cut reason in the OP, which is not base-dependent (or representation-dependent at all) -- you can define and identify real numbers without using Arabic numerals or place value at all, and if you do that, then 0.999...=1 is as clear as not(not(true))=true.
0.9{base10}<0.99{base10} but 0.9...{base10}=0.99...{base10}
0.9{base10}<0.A{base11} but 0.9...{base10}=0.A...{base11}
0.8{base10}<0.9{base10} and 0.8...{base10}<0.9...{base10}
0.9{base10}<0.A{base11} and 0.9...{base10}<0.A...{base11}
I’m not trying to prove “0.999...{base10}=1” is false, nor that “0.111...{base2}=1” is either; in fact the latter is an even more fascinating result.
Also “not(not(true))=true” is good enough for me as well.
You are assuming that there is a link between the per-term value and the whole series value. The connection just isn’t there, and if you think it is, it would be important to show why.
I could have two small finite sums, A = 10 and B = 2+3+5, compare 2<10, 3<10 and 5<10, and then be surprised when A = B. When the number of terms is not finite it’s harder to verify that you haven’t made this kind of error.
So would you say that 0.999...(base10) = 0.AAA...(base11) = 0.111...(base2) = 1?
Yes, it happens to be that way.
Still not entirely convinced. If 0.A > 0.9 then surely 0.A… > 0.9...?
Or does the fact this is true only when we halt at an equal number of digits after the point make a difference? 0.A = 10⁄11 and 0.9 = 9⁄10, so 0.A > 0.9, but 0.A < 0.99.
I think you are still treating infinite decimals with some approximation when the question you are pursuing relies on the finer details.
**Appeal to graphical asymptotes**
Make a plot of the value of the series after x terms, so that one plot F is 0.9, 0.99, 0.999, … and another G is 0.A, 0.AA, 0.AAA, …. Now it is true that every point of G has a point of F below it, and that F never crosses “over” above G. Now consider the asymptotes of F and G (i.e. draw the line that each of F and G approaches). My claim is that the asymptotes of F and G are the same line. It is not the case that G has a higher line than F; they are of exactly the same height, which happens to be 1. The meaning of infinite decimals is more closely connected to the asymptote than to what happens “to the right” in the graph. There is a possibly surprising “taking of a limit” involved which might not be totally natural.
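Something like this Python/matplotlib sketch, if you want to draw it (the labels are my own choice):

```python
import matplotlib.pyplot as plt

ns = range(1, 11)
F = [1 - 10.0**-n for n in ns]   # 0.9, 0.99, 0.999, ...
G = [1 - 11.0**-n for n in ns]   # 0.A, 0.AA, 0.AAA, ... read in base eleven
plt.plot(ns, F, "o-", label="F (base ten nines)")
plt.plot(ns, G, "s-", label="G (base eleven As)")
plt.axhline(1, linestyle="--", label="common asymptote y = 1")
plt.xlabel("number of digits")
plt.ylabel("value of partial sum")
plt.legend()
plt.show()
```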
**Construction of wedges that don’t break the limit**
It might be illuminating to take the reverse approach. Fix an asymptote of 1 and ask which series have it as their asymptote. Note that among the candidates some might be strictly greater than others. If per-term value domination forced a different limit, such “wedgings” would have to have a different limit. But given some series that has 1 as its limit, it’s always possible to have another series that fits between 1 and the original series, and the new series’ limit will still be 1. Thus there are series that are per-term dominating but end up summing to the same thing.
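A concrete wedge, as a sketch: take the midpoints between the sequence and 1.

```python
s = [1 - 10.0**-n for n in range(1, 8)]   # 0.9, 0.99, ... with limit 1
t = [(x + 1) / 2 for x in s]              # wedged strictly between s and 1
for a, b in zip(s, t):
    print(a < b < 1, b)   # always True, and t also heads to 1
```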
**Rate mismatch between accuracy and digits**
If you have 0.9 and 0.99, the latter is more precise. This is also true of 0.A and 0.AA. However, between 0.9 and 0.A, 0.A is a bit more precise. In general, if the bases are not nice multiples of each other, the levels of accuracy won’t line up: after n digits the error is base⁻ⁿ, and matching the accuracies exactly would require 10^k = 11^m, which has no solution in positive integers. But they can come close. Equal accuracy corresponds to k·log 10 = m·log 11, a trade ratio of log 10 ⁄ log 11 ≈ 0.96 undecimal digits per decimal digit. So 0.99999999999 and 0.AAAAAAAAAA are of comparable (though not identical) precision, but one has 11 digits and the other has 10. If we go by a pure digit-to-digit comparison, we end up comparing two 11-digit numbers when comparable accuracy is expressed by a 10-digit and an 11-digit number. At this level of accuracy it’s fair to give decimals 11 digits and undecimals 10 digits. If we go blindly by digit counts, we are unfair about the number of digits available for the level of accuracy demanded. And for most levels of accuracy there is no nice natural number of digits that would be fair to both at the same time.
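The trade ratio and the near-miss in the 11-vs-10 example, as a sketch:

```python
import math

print(math.log(10) / math.log(11))   # ~0.9603 undecimal digits per decimal digit
print(10.0**-11, 11.0**-10)          # errors 1e-11 vs ~3.86e-11: close, not equal
```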
**Graphical rate mismatch**
One can highlight the rate mismatch in graphical terms too. Draw a nice x=y graph and then put a decimal scale and an undecimal scale on the x axis. Mark every point of x=y that corresponds to a scale mark on both scales. Comparing digit to digit corresponds to first going to the 9/10 marker on the decimal scale and the 10/11 marker on the undecimal scale, then to the 9th subdivision on the decimal scale and the 10th subdivision on the undecimal scale. If we step like this, it is true that at each step the undecimal “resting place” is to the right of and above the decimal resting place. But it should also be clear that each time we take a step we keep within the original compartment, we end up in the high part of that compartment, and the right side of the compartment will always be limited by (x=1, y=1). Roughly every 11 decimal steps we land close to a location the undecimal stepping reaches in about 10 steps, and vice versa (the landing spots never coincide exactly, since no decimal mark strictly between 0 and 1 equals an undecimal mark). This gives a nice interpretation for having a finite number of digits. What do you do when you want to take infinitely many steps? One way is to say you can’t take infinite steps, but you can talk about the limit of the finite steps. For every real number less than 1, both steppings will at some finite step cross over that number. 1 is the first real number for which this doesn’t happen. Thus 1 is the “destination of infinite steps”.