As someone who is going to be TA-ing a math course next year, I appreciate hearing your perspective in “The Right to use a name” section. That’s not something I can recall stumbling on, but I can easily see it affecting people, and I will try to keep it in mind in the future, both for myself and for anyone I’m teaching. Certainly the approach of “let f(x) = blah” shortly followed by “and now we will show that f is well-defined” is of a similar vein and is rather oblique when first encountered—how could it not be well-defined, you just defined it! And I can see now that there is a good pedagogical argument for switching the order of exposition, one that mathematicians probably avoid because of either the extra writing involved or simply long-standing convention.
It’s an important lesson, going beyond mathematics, that defining a concept does not guarantee that there is anything that satisfies it. The concept may turn out to be empty, confused, or contradictory, however clear an idea it seemed that you had at the time.
Agreed! Which means that its presentation to people seeing it for the first time could be quite important.
I thought about this some more and want to elaborate on what we’re talking about for those who haven’t encountered the question of being “well-defined” in math and might not know what exactly it is we mean.
Example: A definition that implicitly assumes the existence of something.
If we have a collection of (real) numbers X, we might want to know what the largest number in that collection is. So let’s define max(X) to mean the largest number in X. Is this well-defined? Sure, I just defined it! But then what is max(X) when X is, say, all positive integers? No positive integer is larger than all the others, so there isn’t a largest number in X as every number in X is smaller than some other one.
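To spell out what the definition silently demands, here is the existence condition in symbols, with the positive-integer case worked out (just a sketch; the letter m for the would-be maximum is my own notation):
\[
\max(X) \text{ exists} \iff \exists\, m \in X \ \forall\, x \in X : x \le m.
\]
\[
\text{For } X = \{1, 2, 3, \dots\}: \text{ any candidate } m \in X \text{ has } m + 1 \in X \text{ and } m < m + 1, \text{ so no such } m \text{ exists.}
\]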
Example: A definition that involves an implicit choice.
If we have a (real) number n and write the set of integers as Z, then we might write n+Z to mean all the numbers that may be written as a sum n+k for some integer k. We call n+Z a coset of Z. Note that we are definitely allowing n to be a non-integer value, such as n=1/2. Nothing is wrong with this definition.
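In symbols, the same definition and the n = 1/2 example read as follows (this is only a restatement of the paragraph above, nothing new):
\[
n + \mathbb{Z} = \{\, n + k : k \in \mathbb{Z} \,\}, \qquad \tfrac{1}{2} + \mathbb{Z} = \{\dots, -\tfrac{3}{2}, -\tfrac{1}{2}, \tfrac{1}{2}, \tfrac{3}{2}, \dots\}.
\]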
But we can add integers, so can we add cosets? Well, let’s try defining what it would mean to add two cosets n+Z and m+Z together. Define (n+Z) + (m+Z) = (n+m)+Z, which seems to be the most natural thing to try. But are we done—is this actually well-defined?
Not really: although we wrote n+Z down that way, there are other ways to write it, too. We get exactly the same set of numbers if we instead use (n+1)+Z, so n+Z = (n+1)+Z. Our definition of how to add therefore implicitly assumes that we have a ‘chosen’ way of writing down the coset. But luckily, the result doesn’t actually depend on how we wrote the coset down! For example, ((n+1)+Z) + (m+Z) = (n+m+1)+Z = (n+m)+Z. In essence, even though there are multiple ways to write down the cosets n+Z and m+Z, there are also multiple ways to write down (n+m)+Z, and the different ways of writing the first two just give different ways of writing their sum. So this can be shown to be well-defined even though it involved an implicit choice.
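For anyone who wants the general argument rather than the single n+1 example, here is a sketch (standard, with n' denoting any other way of writing the same coset):
\[
n + \mathbb{Z} = n' + \mathbb{Z} \iff n - n' \in \mathbb{Z}.
\]
\[
\text{If } n - n' \in \mathbb{Z}, \text{ then } (n + m) - (n' + m) = n - n' \in \mathbb{Z}, \text{ so } (n+m) + \mathbb{Z} = (n'+m) + \mathbb{Z}.
\]
The same computation with m varied instead shows that the choice of representative for the second coset doesn’t matter either, so the sum is well-defined.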
Question: Does anyone have a good off-hand example of these kinds of things in real life? The only non-contrived example I have is an ontological ‘proof’ of God.
There are advantages and disadvantages both ways. If the name “order” comes last, then the order axioms will appear unmotivated and arbitrary to the student. I’m not sure what the best thing to do is.
I’d recommend something like: “We want something that corresponds to the intuitive idea of order. Let’s unpack this intuition. Now, given that some relation R has those properties, we are then justified in using the symbol ≤.”
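To give one concrete way the “unpacking” in the middle could go (the comment doesn’t list the properties, so the choice of axioms below is mine; it is just the standard definition of a total order):
\[
\begin{aligned}
&\text{Reflexivity:} && x \mathbin{R} x \text{ for all } x,\\
&\text{Antisymmetry:} && x \mathbin{R} y \text{ and } y \mathbin{R} x \implies x = y,\\
&\text{Transitivity:} && x \mathbin{R} y \text{ and } y \mathbin{R} z \implies x \mathbin{R} z,\\
&\text{Totality:} && x \mathbin{R} y \text{ or } y \mathbin{R} x \text{ for all } x, y.
\end{aligned}
\]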
In other words, you don’t need to hide your destination—you just need to make it clear that intuitive labels are a privilege granted to objects that have demonstrated good behavior.