it has to denote the limit, because we want it to denote a number,
This is the part I take issue with.
It does not have to denote a number, but we choose to let it denote a number (rather than a sequence) because that is how mathematicians find it most convenient to use that particular representation.
That sequence is also quite useful mathematically—just not as useful as the number-that-represents-the-limit. Many sequences are considered to be useful… though generally not in algebra—it’s more common in Calculus, where such sequences are extremely useful. In fact I’d say that in calculus “just a sequence” is perhaps even more useful than “just a number”.
My first impression (and thus what I originally got wrong) was that 1.999… represented the sequence and not the limit because, really, if you meant 2, why not just say 2? :)
If we wanted to talk about the sequence we would never denote it 1.999… We would write {1, 1.9, 1.99, 1.999, …} and perhaps give the formula for the Nth term, which is 2 − 10^-N.
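(For concreteness, a minimal Python sketch of that sequence; the helper name nth_term is mine, not anything from the thread:)

```python
# Terms of the sequence {1, 1.9, 1.99, 1.999, ...} from the Nth-term
# formula 2 - 10^-N, with N starting at 0.
def nth_term(n):
    return 2 - 10 ** -n

print([nth_term(n) for n in range(5)])  # [1, 1.9, 1.99, 1.999, 1.9999]
```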
Hi Misha, I might also turn that argument back on you and repeat what I said before:
“if you meant 2, why not just say 2?” It’s as valid as “if you meant the sequence, why not just write {1, 1.9, 1.99, 1.999, …}”?
Clearly there are other reasons for using something that is not the usual convention. There are definitely good reasons for representing infinite series or sequences… as you have pointed out. However—there is no particular reason why mathematics has chosen to use 1.999… to mean the limit, as opposed to the actual infinite series. Either one could be equally validly used in this situation.
It is only by common convention that mathematics uses it to represent the actual limit (as n tends to infinity) instead of the other possibility, which would be “the actual limit as n tends to infinity… if we actually take it to infinity, or an infinitesimal less than the limit if we don’t”, which is how I assumed (incorrectly) it was to be used.
However, the other thing you say, that “we never denote it 1.999...”, raises an interesting thought, and if I grasp what you’re saying correctly, then I disagree with you.
As I’ve mentioned in another comment now—mathematical symbolic conventions are the same as “words”—they are map, not territory. We define them to mean what we want them to mean. We choose what they mean by common consensus (motivated by convenience). It is a very good idea to follow that convention—which is why I decided I was wrong to use it the way I originally assumed it was being used… and from now on, I will use the usual convention...
However, you seem to be saying that you think the current way is “the one true way” and that the other way is not valid at all… ie treating “we would never denote it 1.9999...” as some sort of fact out there in reality, when really it’s just a convention that we’ve chosen, and is therefore non-obvious from looking at the symbol without prior knowledge of the convention (as I did).
I am trying to explain that this is not the case: without knowing the convention, either meaning is valid… it’s only having now been shown the convention that I know what is generally “by definition” meant by the symbol, and it happened to be different from the one I automatically picked without prior knowledge.
so yes, I think we would never denote the sequence as 1.999… but not because the sequence is not representable by 1.999… - simply because it is not conventional to do so.
You have a point. I tend to dislike arguments about mathematics that start with “well, this definition is just a choice” because they don’t capture any substance about any actual math. As a result, I tried to head that off by (perhaps poorly) making a case for why this definition is a reasonable choice.
In any case, I misunderstood the nature of what you were saying about the convention, so I don’t think we’re in any actual disagreement.
I might also turn that argument back on you and repeat what I said before: “if you meant 2, why not just say 2?”
If I meant 2, I would say 2. However, our system of writing repeating decimals also allows us to (redundantly) write the repeating decimal 1.999… which is equivalent to 2. It’s not a very useful repeating decimal, but it sometimes comes out as a result of an algorithm: e.g. when you multiply 2⁄9 = 0.222… by 9, you will get 1.999… as you calculate it, instead of getting 2 straight off the bat.
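(To illustrate that point: a small Python sketch of the long-multiplication behaviour, run on a truncation of 0.222…; the function name is invented. Every digit position turns into a 9 by way of the carry, and only the truncation makes the last digit an 8:)

```python
# Long multiplication of 0.222... (truncated to 8 digits) by 9,
# processed least-significant digit first with a carry, the way the
# pencil-and-paper algorithm works.
def times_nine(digits):
    out, carry = [], 0
    for d in reversed(digits):
        carry, r = divmod(d * 9 + carry, 10)
        out.append(r)
    return carry, out[::-1]  # (integer part, fractional digits)

whole, frac = times_nine([2] * 8)
print(f"{whole}.{''.join(map(str, frac))}")  # 1.99999998
```

With infinitely many 2s there is no last digit, so the algorithm emits 1.999… rather than ever writing a 2.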
You have a point. I tend to dislike arguments about mathematics that start with “well, this definition is just a choice”
Me too! Especially as I’ve just been reading that sequence here about “proving by definition” and “I can define it any way I like”… that’s why I tried to make it very clear I wasn’t saying that… I also needed to head off the heading-off ;)
Anyway—I believe we are just in violent agreement here, so no problems ;)
OK, let me put it this way: If we are considering the question “Is 1.999...=2?”, the context makes it clear that we must be considering the left hand side as a number, because the RHS is a number. (Would you interpret 2 in that context as the constant 2 sequence? Well then of course they’re not equal, but this is obvious and unenlightening.) Why would you compare a number for equality against a sequence? They’re entirely different sorts of objects.
is “x-squared = 2”? is a perfectly valid question to ask in mathematics even though the LHS is not obviously a number
In this case, it is a formula that can equate to a number… just as the sequence is a (very limited) formula that can equate to 2 - if we take the sequence to its limit; or that falls just shy of 2 - if we try to represent it in any finite/limited way.
In stating that 1.9999… is a number, you are assuming the usage of the limit/number, rather than the other potential usage ie, you are falling into the same assumption-trap that I fell into…
It’s just that your assumption happens to be the one that matches with common usage, whereas mine wasn’t ;)
Using 1.9999… to represent the limit of the sequence (ie the number) is certainly true by convention (ie “by definition”), but is by no means the only way to interpret the symbols. It could just as easily represent the sequence itself… we just don’t happen to do that—we define what mathematical symbols refer to… they’re just words/pointers to what we’re talking about, yes?
is “x-squared = 2”? is a perfectly valid question to ask in mathematics even though the LHS is not obviously a number
Er… yes it is? In that context, x^2 is a number. We just don’t know what number it might be. By contrast, the sequence (1, 1.9, 1.99, …) is not a number at all.
Furthermore, even if we insist on regarding x^2 as a formula with a free variable, your analogy doesn’t hold. The sequence (1, 1.9, 1.99, …) has no free variables; it’s one specific sequence.
You are correct that the convention could have been that 1.999… represents the sequence… but as I stated before, in that case, the question of whether it equals 2 would not be very meaningful. Given the context you can deduce that we are using the convention that it designates a number.
By contrast, the sequence (1, 1.9, 1.99, …) is not a number at all
yes I agree, a sequence is not a number, it’s a sequence… though I wonder if we’re getting confused, because we’re talking about the sequence, instead of the infinite series (1 + 0.9 + 0.09 + ...) which is actually what I had in my head when I was first thinking about 1.999...
Along the way, somebody said “sequence” and that’s the word I started using… when really I’ve been thinking about the infinite series… anyway
The infinite series has far less freedom than x^2, but that doesn’t mean that it’s a different thing entirely from x^2.
Let’s consider “x − 1”.
“x − 1” is not a number until we equate it to something that lets us determine what x is…
If we use “x − 1 = 4”, however, we can solve for x and there are no degrees of freedom.
If we use “1.9 < x − 1 < 2” we have some minor degree of freedom… only a little more than the infinite series in question.
Admittedly, the only degree of freedom left to 1.9999… (the series) is to either be 2 or an infinitesimal away from 2. But I don’t think that makes it different in kind to x −1 = 4
anyway—I think we’re probably just in “violent agreement” (as a friend of mine once used to say) ;)
All the bits that I was trying to really say we agree over… now we’re just discussing the related maths ;)
the question of whether it equals 2 would not be very meaningful
Ok, let’s move into hypothetical land and pretend that 1.9999… represents what I originally thought it represents.
The comparison with the number 2 signals that what you want to do is evaluate the series at its limit.
It’s totally supportable for you to equate 1.9999… = 2 and determine that this is a statement that is:
1) true when the infinite series has been evaluated to the limit
2) false when it is represented in any finite/limited way
Edit: ah… that’s why you can’t use stars for to-the-power-of ;)
anyway—I think we’re probably just in “violent agreement” (as a friend of mine once used to say) ;)
Er, no… there still seems to be quite a bit of confusion here...
All the bits that I was trying to really say we agree over… now we’re just discussing the related maths ;)
Well, if you really think that’s not significant… :P
yes I agree, a sequence is not a number, it’s a sequence… though I wonder if we’re getting confused, because we’re talking about the sequence, instead of the infinite series (1 + 0.9 + 0.09 + ...) which is actually what I had in my head when I was first thinking about 1.999...
Along the way, somebody said “sequence” and that’s the word I started using… when really I’ve been thinking about the infinite series… anyway
It’s not clear to me what distinction you’re drawing here. A series is a sequence, just written differently.
The infinite series has far less freedom than x^2, but that doesn’t mean that it’s a different thing entirely from x^2.
It’s not at all clear to me what notion of “degrees of freedom” you’re using here. The sequence is an entirely different sort of thing than x^2, in that one is a sequence, a complete mathematical object, while the other is an expression with a free variable. If by “degrees of freedom” you mean something like “free variables”, then the sequence has none. Now it’s true that, being a sequence of real numbers, it is a function from N to R, but there’s quite a difference between the expression 2-10^(-n), and the function (i.e. sequence) n |-> 2-10^(-n) ; yes, normally we simply write the latter as the former when the meaning is understood, but under the hood they’re quite different. In a sense, functions are mathematical, expressions are metamathematical.
When I say “x^2 is a number”, what I mean is essentially, if we’re working under a type system, then it has the type “real number”. It’s an expression with one free variable, but it has type “real number”. By contrast, the function x |-> x^2 has type “function from reals to reals”, the sequence (1, 1.9, 1.99, …) has type “sequence of reals”… (I realize that in standard mathematics we don’t actually technically work under a type system, but for practical purposes it’s a good way to think, and I’m pretty sure it’s possible to sensibly formulate things this way.) To equate a sequence to a number may technically in a sense return “false”, but it’s better to think of it as returning “type error”. By contrast, equating x^2 to 2 - not equating the function x|->x^2 to 2, which is a type error! - allows us to infer that x^2 is also a number.
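(A rough Python rendering of that type-level distinction; the names are mine, and the checker behaviour is how I’d expect a tool like mypy with --strict-equality to treat it, not something from this thread:)

```python
from typing import Callable

# The sequence as a complete mathematical object: the function
# n |-> 2 - 10^(-n) from naturals to reals.
seq: Callable[[int], float] = lambda n: 2 - 10 ** -n

print(seq(3))    # 1.999 -- applying it to an argument yields a number
print(seq == 2)  # False at runtime, but conceptually a type error:
                 # a function is not the sort of thing a number equals
```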
Admittedly, the only degree of freedom left to 1.9999… (the series) is to either be 2 or an infinitesimal away from 2. But I don’t think that makes it different in kind to x −1 = 4
Note, BTW, that the real numbers don’t have any infinitesimals (save for 0, if you count it).
It’s totally supportable for you to equate 1.9999… = 2 and determine that this is a statement that is: 1) true when the infinite series has been evaluated to the limit 2) false when it is represented in any finite/limited way
Sorry, what does it even mean for it to be “represented in a finite/limited way”? The alternative to it being a number is it being an infinite sequence, which is, well, infinite.
I am really getting the idea you should go read the standard stuff on this and clear up any remaining confusion that way, rather than try to argue this here...
Er, no… there still seems to be quite a bit of confusion here...
ah—then I apologise. I need to clarify. I see that there are several points where you’ve pointed out that I am using mathematical language in a sloppy fashion. How about I get those out of the way first.
that the real numbers don’t have any infinitesimals
I should not have used the word “infinitesimal”—as I really meant “a very small number” and was being lazy. I am aware that “the theory of infinitesimals” has an actual mathematical meaning… but this is not the way in which I was using the word.
I’ll explain what I meant in a bit…
what does it even mean for it to be “represented in a finite/limited way”?
If I write a program that starts by adding 1 to 0.9 then I put it into a loop where it then adds “one tenth of the previous number you just added”...
If at any point I tell the program “stop now and print out what you’ve got so far”… then what it will print out is something that is “a very small number” less than 2.
If I left the program running for literally an infinite amount of time, it would eventually reach two. If I stop it at any point at all (ie the program is finite), then it will return a number that is a very small amount less than two.
In this way, the program has generated a finite approximation of 1.999… that is != 2
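(A minimal sketch of that program, assuming the loop described above; partial_sum and its argument are my names. Note that float64 rounds the total to 2.0 after roughly sixteen steps, which is a limitation of the sketch, not a proof about the series:)

```python
# Sum 1 + 0.9 + 0.09 + ..., stopping after a finite number of steps.
def partial_sum(steps):
    total = 1.0
    term = 0.9
    for _ in range(steps):
        total += term
        term /= 10  # each new term is one tenth of the previous one
    return total

print(partial_sum(5))  # ~1.99999 -- a "very small number" less than 2
```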
As humans, we can think about the problem in a way that a stupid computer algorithm cannot, and can prove to ourselves that 1+(0.111… * 9) actually == 2 exactly.
but that is knowledge outside of the proposed “finite” solution/system as described above.
Thus the two are different “representations” of 1.999...
I am reminded of the old engineering adage that “3 is a good approximation of Pi for all practical purposes”—which tends to make some mathematicians squirm.
It’s not at all clear to me what notion of “degrees of freedom” you’re using here.
x^2 has one degree of freedom: x can be any real number.
1 < x < 1.1 has less freedom than that: x can be any real number between 1 and 1.1.
With the previous description I’ve given of the difference between the results of a “finite” and “infinite” calculation of the limit of 1.999… (the series), “x = 1.999...” can be either 2 (if we can go to the limit or can think about it in a way outside of the summing-the-finite-series method) or a very small number less than two (if we begin calculating but have to stop calculating for some weird reason, such as running out of time before the heat-death of the universe).
The “freedom” involved here is even more limited than the freedom of 1 < x < 1.1
and would not constitute a full “degree” of freedom in the mathematical sense.
But in the way that I have already mentioned above (quite understanding that this may not be the full mathematically approved way of reasoning about it)… it can have more than one value (given the previously-stated contexts) and thus may be considered to have some “freedom”.
…even if it’s only between “2” and “a very, very small distance from 2”
I’d like to think of it as a fractional degree of freedom :)
I am really getting the idea you should go read the standard stuff on this and clear up any remaining confusion that way, rather than try to argue this here...
Firstly—there is no surprise that you are unfamiliar with my background… as I haven’t specifically shared it with you. But I happen to have actually started a maths degree. I had a distinction average, but didn’t enjoy it enough… so I switched to computing.
I’m certainly not a total maths expert (unlike my Dad and my maths-PhD cousin) but I would say that I’m fairly familiar with “the standard stuff”.
Of course… as should be obvious—this does not mean that errors do not still slip through (as I’ve recently just clearly learned).
Secondly—with respect, I think that some of the confusion here is that you are confused as to what I’m talking about… that is totally my fault for not being clear—but it will not be cleared up by me going away and researching anything… because I think it’s more of a communication issue than a knowledge-based one.
So… back to the point at hand.
I think I get what you’re trying to say with the type-error example. But I don’t know that you quite get what I’m saying. That is probably because I’ve been saying it poorly…
I don’t know if you’ve programmed in typeless programming languages, but my original understanding is more along the lines of:
Let’s say I have this object, and on the outside it’s called “1.999...”
When I ask it “how do I calculate your value?” it can reply “well, you add 1 to 0.9 and then 0.09 and then 0.009...” and it keeps going on and on… and if I write it down as it comes out… it looks just like the Infinite Series.
So then I ask it “what number do you equate to if I get to the end of all that addition?” and it says “2” - and that looks like the Limit
I could even ask it “do you equal two?” and it could realise that I’m asking it to calculate its limit and say “yes”
But then I actually try the addition in the Series myself… and I go on and on and on… and each next value looks like the next number in the Sequence
but eventually I get bored and stop… and the number I have is not quite 2… almost, but not quite… which is the Finite Representation that I keep talking about.
Then you can see that this object matches all the properties that I have mentioned in my previous discussion… no type-errors required, and each “value” comes naturally from the given context.
That “object” is what I have in my head when I’m talking about something that can be both the number and the sequence, and in which it can reveal the properties of itself depending on how you ask it.
...it’s also a reasonably good example of duck-typing ;)
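(Here’s a toy Python version of that object, with all the method names invented; it answers “number-ish” or “sequence-ish” depending on what you ask, which is the duck-typing point:)

```python
import itertools

class RepeatingDecimal:
    """The object called "1.999...": a process and a value at once."""

    def terms(self):
        # "how do I calculate your value?" -- the Infinite Series
        yield 1.0
        term = 0.9
        while True:
            yield term
            term /= 10

    def limit(self):
        # "what do you equate to at the end of all that addition?"
        return 2

    def __eq__(self, other):
        # "do you equal two?" -- answered via the limit
        return self.limit() == other

x = RepeatingDecimal()
print(x == 2)                               # True: the Limit answer
print(sum(itertools.islice(x.terms(), 6)))  # ~1.99999: stop early and
                                            # you get the Finite answer
```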
We want it to denote a number for simple consistency. .11111… is a number. It is a limit. 3.14159… should denote a number. Why should 1.99999… be any different? If we are going to be at all consistent in our notation they should all represent the same sort of series. Otherwise this is extremely irregular notation to no end.
Yes, I totally agree with you: consistency and convenience are why we have chosen to use 1.9999… notation to represent the limit, rather than the sequence.
consistency and convenience tend to drive most mathematical notational choices (with occasional other influences), for reasons that should be extremely obvious.
It just so happened that, on this occasion, I was not aware enough of either the actual convention, or of other “things that this notation would be consistent with” before I guessed at the meaning of this particular item of notation.
And so my guessed meaning was one of the two things that I thought would be “likely meanings” for the notation.
In this case, my guess was for the wrong one of the two.
I seem to be getting a lot of comments that are implying that I should have somehow naturally realised which of the two meanings was “correct”… and have tried very hard to explain why it is not obvious, and not somehow inevitable.
Both of my possible interpretations were potentially valid, and I’d like to insist that the sequence-one is wrong only by convention (ie maths has to pick one meaning or the other, and it picked the one most convenient for mathematicians, which in this case is the limit-interpretation)… but as is clearly evidenced by the fact that there is so much confusion around the subject (ref the wikipedia page), it is not obvious intuitively that one is “correct” and one is “not correct”.
I maintain that without knowledge of the convention, you cannot know which is the “correct” interpretation. Any assumption otherwise is simply hindsight bias.
it is not obvious intuitively that one is “correct” and one is “not correct”.
There is no inherent meaning to a set of symbols scrawled on paper. There is no “correct” and “incorrect” way of interpreting it; only convention (unless your goal is to communicate with others). There is no Platonic Ideal of Mathematical Notation, so obviously there is no objective way to pluck the “correct” interpretation of some symbols out of the interstellar void. You are right in as far as you say that.
However, you are expected to know the meaning of the notation you use in exactly the same way that you are expected to know the meaning of the words you use. Not knowing is understandable, but observing that it is possible to not-know a convention is not a particular philosophical insight.
People guess the meanings of words and notations from context all the time. Especially when they aren’t specialists in the field in question. Lots of interested amateurs exist and read things without the benefit of years of training before hand.
Some things just lend themselves more easily to guessing the accepted-meaning than others. It is often a good idea to make things easier to guess the accepted-meaning, rather than to fail to do so, if at all possible. Make it hard to fail.