Still not entirely convinced. If 0.A > 0.9, then surely 0.A… > 0.9…?
Or does it make a difference that this is only true when we halt at an equal number of digits after the point? 0.A = 10⁄11 and 0.9 = 9⁄10, so 0.A > 0.9, but 0.A < 0.99.
I think you are still treating infinite decimals with some approximation, when the question you are pursuing relies on finer details.
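Your finite comparisons are right, and easy to verify with exact rational arithmetic; here is a quick sketch using Python's `fractions` module (the variable names are mine):

```python
from fractions import Fraction

# Finite truncations written as exact fractions.
undec_1 = Fraction(10, 11)    # 0.A  in base 11
dec_1   = Fraction(9, 10)     # 0.9  in base 10
dec_2   = Fraction(99, 100)   # 0.99 in base 10

print(undec_1 > dec_1)   # True: 0.A > 0.9
print(undec_1 < dec_2)   # True: 0.A < 0.99
```

So at equal digit counts the undecimal truncation is ahead, but one extra decimal digit already overtakes it.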
**Appeal to graphical asymptotes**
Make a plot of the value of each series after x terms, so that one plot F is 0.9, 0.99, 0.999, … and another G is 0.A, 0.AA, 0.AAA, …. It is true that every point of G has a point of F below it, and that F never crosses over above G. Now consider the asymptotes of F and G (i.e. draw the lines that F and G approach). My claim is that the asymptotes of F and G are the same line. It is not the case that G approaches a line higher than F's; they are of exactly the same height, which happens to be 1. The meaning of an infinite decimal is more closely connected to the asymptote than to what happens “to the right” in the graph. There is a possibly surprising “taking of a limit” involved, which might not feel totally natural.
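The two plots can be sketched numerically; `F` and `G` below are my names for the partial sums of the two series:

```python
from fractions import Fraction

def F(n):
    """Value of 0.99...9 with n nines (base 10): 1 - 10**-n."""
    return 1 - Fraction(1, 10**n)

def G(n):
    """Value of 0.AA...A with n A's (base 11): 1 - 11**-n."""
    return 1 - Fraction(1, 11**n)

# G stays strictly above F at every finite step...
for n in range(1, 20):
    assert F(n) < G(n) < 1

# ...yet both gaps to 1 shrink below any tolerance, so the
# asymptote of both plots is the same line y = 1.
print(float(1 - F(15)), float(1 - G(15)))
```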
**Construction of wedges that don’t break the limit**
It might be illuminating to take the reverse approach: fix an asymptote of 1 and ask which series have it as their asymptote. Note that among the candidates, some might be strictly greater than others term by term. If term-by-term domination forced a different limit, such “wedgings” would have to have a different limit. But given any series with limit 1, it is always possible to construct another series that fits between it and 1, and the new series’ limit will also be 1. Thus there are series that dominate each other term by term but still sum to the same thing.
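One such wedge construction, sketched concretely: take the midpoint between each term and 1 (the helper name `wedge` is mine).

```python
from fractions import Fraction

def wedge(s):
    """Midpoint between a term s < 1 and 1: strictly above s,
    still strictly below 1, and its gap to 1 is half of s's gap."""
    return (s + 1) / 2

# Start from the partial sums of 0.999..., which tend to 1.
S = [1 - Fraction(1, 10**n) for n in range(1, 10)]
W = [wedge(s) for s in S]

for s, w in zip(S, W):
    assert s < w < 1    # W dominates S term by term

# Since 1 - W[n] == (1 - S[n]) / 2, W also converges to 1:
# term-by-term domination does not force a different limit.
```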
**Rate mismatch between accuracy and digits**
If you have 0.9 and 0.99, the latter is more precise. This is also true of 0.A and 0.AA. But between 0.9 and 0.A, 0.A is a bit more precise. In general, if the bases are not nice multiples of each other, the levels of accuracy at a given digit count won’t match. With n digits, the decimal truncation has an error of 1/10ⁿ and the undecimal one an error of 1/11ⁿ, and since no power of 10 ever equals a power of 11, the two accuracies never coincide exactly; they only interleave. For example 10¹⁰ < 11¹⁰ < 10¹¹, so ten undecimal digits are worth somewhere between ten and eleven decimal digits of accuracy. If we go by a pure digit-to-digit comparison, we compare two n-digit numbers when fairness would demand giving the decimal side roughly log₁₀(11) ≈ 1.04 digits for each undecimal digit. Going blindly by digit counts is unfair to the number of digits needed for the level of accuracy demanded, and for most levels of accuracy there is no natural number of digits that would be fair to both at the same time.
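The interleaving of accuracies is easy to check directly; a small sketch:

```python
import math

# n trailing nines give an error of 10**-n; n trailing A's give 11**-n.
# An exact tie would need 10**m == 11**n, which has no positive
# integer solutions, so the accuracies only interleave:
assert 10**10 < 11**10 < 10**11   # 11**10 == 25937424601

# Per digit, an undecimal digit is worth log10(11) decimal digits:
print(math.log10(11))   # ~1.0414
```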
**Graphical rate mismatch**
One can highlight the rate mismatch in graphical terms too. Take the graph of x = y and put both a decimal scale and an undecimal scale on the x-axis. Mark every point of x = y that corresponds to a scale mark on either scale. Comparing digit to digit corresponds to first going to the 9/10 marker on the decimal scale and the 10/11 marker on the undecimal scale, then to the 9th subdivision on the decimal scale and the 10th subdivision on the undecimal scale, and so on. As we step, it is true that at each step the undecimal “resting place” lies to the right of and above the decimal resting place. But it should also be clear that with each step we stay within the previous compartment, landing in its high part, and that the right side of every compartment is bounded by the point (1, 1). Roughly every eleven decimal steps, the decimal stepping catches up with (and in fact slightly passes) where ten undecimal steps land, since 10¹¹ > 11¹⁰. This gives a nice interpretation for a finite number of digits. What do you do when you want to take infinitely many steps? One answer is that you can’t take infinitely many steps, but you can talk about the limit of the finite steps. For every real number less than 1, both steppings will cross over that number at some finite step. 1 is the first real number for which this does not happen. Thus 1 is the “destination of infinite steps”.
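The “every real below 1 is crossed at a finite step” claim can be sketched like this (`cross` is a helper name of my own):

```python
def cross(r, base):
    """Smallest n with 1 - base**-n > r, i.e. the finite step at which
    the all-top-digit truncation in the given base passes r (needs r < 1)."""
    n = 1
    while 1 - base ** -n <= r:
        n += 1
    return n

for r in [0.5, 0.9, 0.999, 0.999999]:
    print(r, cross(r, 10), cross(r, 11))
# Every r < 1 is eventually crossed by both steppings, while 1 itself
# never is: that is why the limit of both is exactly 1.
```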
So would you say that 0.999… (base 10) = 0.AAA… (base 11) = 0.111… (base 2) = 1?
Yes, it happens to be that way.