Your struggles with coordinate vectors sound familiar. I remember that, in the book I used, they introduced $\Phi_B : \mathbb{R}^n \to V$ for a basis $B$ and called $x \in \mathbb{R}^n$ a coordinate vector of $v \in V$ iff $\Phi_B(x) = v$ ($V$ is left general on purpose). This explicit formulation as a simple bijective function cleared up the confusion for me. You can do neat things with it, like turn an abstract map into a map from one basis to another: $T_{B,B'} = \Phi_B^{-1} \circ T \circ \Phi_{B'}$ for linear $T : V \to V$.
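If it helps to see this concretely, here is a minimal numerical sketch, under the assumption $V = \mathbb{R}^n$ so that $\Phi_B$ is just multiplication by the matrix whose columns are the basis vectors (the function names and the sample basis are mine, not from any book):

```python
import numpy as np

# V = R^3 with a non-standard basis B; the columns of B are the basis vectors.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

def phi(B, x):
    """Basis map Phi_B: coordinates x in R^n -> the vector sum_i x_i * b_i."""
    return B @ x

def phi_inv(B, v):
    """Inverse map: recover the coordinate vector of v relative to B."""
    return np.linalg.solve(B, v)

# A linear map T: V -> V, written as a matrix in standard coordinates.
T = np.diag([2.0, 3.0, 4.0])

# T_{B,B'} = Phi_B^{-1} o T o Phi_{B'}, as a matrix: B^{-1} T B'.
B_prime = np.eye(3)
T_BBp = np.linalg.solve(B, T @ B_prime)

# Sanity check: applying T_BBp to B'-coordinates gives the same B-coordinates
# as mapping into V, applying T, and pulling back through Phi_B.
x = np.array([1.0, 2.0, 3.0])
assert np.allclose(T_BBp @ x, phi_inv(B, T @ phi(B_prime, x)))
```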
But of course, from a practical standpoint, you don’t think about vectors this way. As with proofs, once you’ve heard a plausible argument and accepted it, there is no reason to keep going back to it. So I am somewhat critical of your book not introducing matrix-vector multiplication, as it is an obvious centerpiece of linear algebra (especially considering that its historical roots lie in solving $Ax = b$).
That looks like it also works. It’s a different philosophy, I think: LADR says “vectors and matrices are fundamentally different objects and vectors aren’t dependent on bases, ever”, while your view says “each basis defines a bijective function that maps vectors from the no-basis world into the basis world (or from the basis-1 world into the basis-2 world)” without insisting that they are fundamentally different objects. If $V = \mathbb{R}^n$, then they’re the same kind of object, and you just need to know which world you’re in (i.e. relative to which basis, if any, you need to interpret your vector).
I don’t think not having matrix-vector multiplication is an issue. The LADR model still allows you to do everything you can do in normal LA. If you want to multiply a matrix $A$ with a vector $v$, you just turn $v$ into an n-by-1 matrix and then multiply two matrices. So you multiply $A \cdot M(B, v)$ rather than $A \cdot v$. It forces you to be explicit about which basis you want the vector to be relative to, which seems like a good thing to me. If $B$ is the standard basis, then $M(B, v)$ will have the same entries as $v$; it’ll just be written as $\begin{bmatrix} v_1 \\ \vdots \\ v_n \end{bmatrix}$ rather than $(v_1, \ldots, v_n)$.
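A small runnable sketch of that translation step, again assuming $V = \mathbb{R}^n$ so the coordinates can be computed by solving a linear system (the concrete numbers are just an illustration):

```python
import numpy as np

def M(B, v):
    """v's coordinate column relative to basis B, as an n-by-1 matrix.

    B is an n-by-n matrix whose columns are the basis vectors; for the
    standard basis, this is just v's own entries reshaped into a column.
    """
    return np.linalg.solve(B, v).reshape(-1, 1)

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([5.0, 6.0])
E = np.eye(2)  # standard basis

# Instead of an undefined "matrix times vector", explicitly multiply two matrices:
result = A @ M(E, v)                       # a 2-by-1 matrix
assert np.allclose(result.ravel(), A @ v)  # same numbers as ordinary A @ v
```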
Just to share my two cents on the matter: the distinction between abstract vectors and maps on the one hand, and columns of numbers (confusingly also called vectors) and matrices on the other, is a central headache for Linear Algebra students across the globe (and by extension also for the lecturers). If the approach this book takes works for you then that’s great to hear, but I’m wary of ‘hacks’ like this that only supply a partial view of the distinction. In particular, matrix-vector multiplication is used almost everywhere; if you need several translation steps to make use of it, that could be a serious obstacle. Also, the base map $\Phi_B : F^n \to V$ that limerott mentions is of central importance from a category-theoretic point of view and is essential in certain more advanced fields, for example differential geometry. I’m therefore not too keen on leaving it out of a Linear Algebra introduction.
Unfortunately, I don’t really know what to do about this; like I said, this topic has always caused major confusion, and the trade-off between completeness and conciseness is extremely complicated. But do beware that, based only on my understanding of your post, you might still be missing important insights about the distinction between numerical linear algebra and abstract linear algebra.
I honestly don’t think the trade-off is real (but please tell me if you don’t find my reasons compelling). If I study category theory next and it does some cool stuff with the base map, I won’t reject that on the basis of it contradicting this book. Ditto if I actually use LA and want to do calculations. The philosophical understanding that matrix-vector multiplication isn’t ultimately a thing can peacefully coexist with me doing matrix-vector multiplication whenever I want to, just like the understanding that the natural number 1 is a different object from the integer 1 peacefully coexists with me treating them as equal in every other context.
I don’t agree that this view is theoretically limiting (if you were meaning to imply that), because it allows any calculation that was possible before. It’s even compatible with the base map.
Ah, I see. So vectors are treated as abstract objects, and representing them in matrix form is an additional step. And instead of coordinate vectors, which may be confusing, you only work with matrices. I can imagine that this is a useful perspective when you work with many different bases. Thank you for sharing it.
Would you then agree to define $A \cdot v := A \cdot M(E, v)$, where $E$ is the standard basis?
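(With $E$ the standard basis, $M(E, v)$ just stacks $v$’s entries into a column, so this definition reproduces the usual $(Av)_i = \sum_j A_{ij} v_j$.)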
I wouldn’t be heartbroken if it was defined like that, but I wouldn’t do it if I were writing a textbook myself. I think the LADR approach, where vectors and matrices are fundamentally different, makes the most sense; and if you want to bring a vector into the matrix world, then why not demand that you do it explicitly?
If you actually use LA in practice, there is nothing stopping you from writing $Av$. You can be ‘sloppy’ in practice, if you know what you’re doing, while still thinking that drawing this distinction is a good idea in a theoretical textbook.