Hi Misha, I might also turn that argument back on you and repeat what I said before: “if you meant 2, why not just say 2?” It’s as valid as “if you meant the sequence, why not just write {1, 1.9, 1.99, 1.999, …}”?
Clearly there can be good reasons for using notation that departs from the usual convention, and there are definitely good reasons for wanting to represent infinite series or sequences, as you have pointed out. However, there is no particular reason why mathematics has chosen to use 1.999… to mean the limit, as opposed to the infinite series itself. Either meaning could be used equally validly in this situation.
It is only by common convention that mathematics uses it to represent the actual limit (as n tends to infinity), instead of the other possibility: “the actual limit as n tends to infinity… if we actually take it to infinity, or an infinitesimal less than the limit if we don’t”, which is how I (incorrectly) assumed it was to be used.
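To make the two readings concrete, here is a rough sketch (in Python, purely as an illustration): every term of the sequence {1, 1.9, 1.99, …} falls short of 2 by some finite amount, while the limit of the sequence, which is what the usual convention takes 1.999… to denote, is exactly 2.

```python
from fractions import Fraction

# Illustrative sketch: the n-th term of the sequence 1, 1.9, 1.99, ...
# is 2 - 1/10^n.  No individual term equals 2, but the gaps shrink to 0,
# so the limit of the sequence (what "1.999..." conventionally denotes) is 2.
def term(n):
    return 2 - Fraction(1, 10 ** n)

for n in range(6):
    t = term(n)
    print(f"n={n}: term = {float(t)}, gap to 2 = {float(2 - t)}")
```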
However, the other thing you say, that “we never denote it 1.999...”, raises an interesting thought, and if I grasp what you’re saying correctly, then I disagree with you.
As I’ve mentioned in another comment, mathematical symbolic conventions are the same as “words”: they are map, not territory. We define them to mean what we want them to mean, and we choose what they mean by common consensus (motivated by convenience). It is a very good idea to follow that convention, which is why I decided I was wrong to use it the way I originally assumed it was being used; from now on, I will use the usual convention.
However, you seem to be saying that you think the current way is “the one true way” and that the other way is not valid at all, i.e. that “we would never denote it 1.9999...” is some sort of fact out there in reality, when really it’s just a convention we’ve chosen, and is therefore non-obvious from looking at the symbol without prior knowledge of the convention (as I did).
I am trying to explain that this is not the case: without knowing the convention, either meaning is valid. It’s only now, having been shown the convention, that I know what is generally meant “by definition” by the symbol, and it happened to be different from the meaning I automatically picked without prior knowledge.
So yes, I agree that we would never denote the sequence as 1.999…, but not because the sequence is not representable by 1.999…; simply because it is not conventional to do so.
You have a point. I tend to dislike arguments about mathematics that start with “well, this definition is just a choice” because they don’t capture any substance about any actual math. As a result, I tried to head that off by (perhaps poorly) making a case for why this definition is a reasonable choice.
In any case, I misunderstood the nature of what you were saying about the convention, so I don’t think we’re in any actual disagreement.
I might also turn that argument back on you and repeat what I said before: “if you meant 2, why not just say 2?”
If I meant 2, I would say 2. However, our system of writing repeating decimals also allows us to (redundantly) write the repeating decimal 1.999… which is equivalent to 2. It’s not a very useful repeating decimal, but it sometimes comes out as a result of an algorithm: e.g. when you multiply 2⁄9 = 0.222… by 9, you will get 1.999… as you calculate it, instead of getting 2 straight off the bat.
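Just to illustrate that point, here is a rough sketch (in Python): every finite truncation of 0.222… multiplied by 9 comes out as 1.99…8, and only the exact value 2⁄9 × 9 gives 2, i.e. the number also written 1.999….

```python
from fractions import Fraction

# Illustrative sketch: multiply finite truncations of 0.222... by 9.
# Each truncation gives 1.99...8; the exact fraction 2/9 times 9 gives 2,
# which is the number also written 1.999...
for n in range(2, 7):
    truncated = Fraction(int("2" * n), 10 ** n)   # 0.22, 0.222, 0.2222, ...
    print(f"0.{'2' * n} * 9 = {float(truncated * 9):.{n}f}")

print(Fraction(2, 9) * 9)   # exactly 2
```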
You have a point. I tend to dislike arguments about mathematics that start with “well, this definition is just a choice”
Me too! Especially as I’ve just been reading the sequence here about “proving by definition” and “I can define it any way I like”… that’s why I tried to make it very clear that I wasn’t saying that. I also needed to head it off ;)
Anyway—I believe we are just in violent agreement here, so no problems ;)