I wonder, what do you think about the chapter about dual spaces, dual maps, annihilator, etc.? To me it seemed not too connected with everything else, and that’s bad. If I remember correctly, the author uses duality just to prove a few results and then throws duality away and never uses it again. Also in real life (numerical linear algebra, machine learning, and stuff) I am not aware of any use for those concepts.
So for “general” operators, this is always true, but there do exist specific operators for which it isn’t.
I believe when mathematicians say that in general P(x) holds, they mean that P(x) holds for every x in the domain of interest. Perhaps you want to use “typical” instead of “general” here. E.g., there is a notion of the typical tensor rank for tensors of a given shape: a rank that occurs with non-zero probability when a random tensor of that shape is sampled.
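As a toy illustration of “typical” (my own sketch, not something from the book): a random square matrix is full-rank with probability 1, so full rank is the typical rank, even though rank-deficient matrices exist and the “for all” statement is false.

```python
import numpy as np

# Sample random 3x3 matrices: every draw is full-rank (typical rank = 3),
# yet "all 3x3 matrices have rank 3" is false (e.g. the zero matrix).
rng = np.random.default_rng(0)
ranks = [np.linalg.matrix_rank(rng.standard_normal((3, 3))) for _ in range(1000)]
print(all(r == 3 for r in ranks))                # True

# A specific matrix for which the "general" statement fails:
print(np.linalg.matrix_rank(np.zeros((3, 3))))   # 0
```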
I’ve used duality at least a bit in my subsequent ML classes, so I was happy that the book covered it. I do think I remember it not being used super much in the rest of the book.
The one thing the duality section was connected to was the Riesz representation theorem. Riesz states that every linear functional φ on a finite-dimensional inner product space has a unique vector f such that φ(v) = <v,f> for all v. It gives an isomorphism from functionals to vectors for a given inner product, since each functional is just taking the inner product with its vector.
It’s not tied to the section on duals in the text, but the section on duals lets you appreciate the result more.
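For concreteness, here’s a minimal NumPy sketch of that correspondence in R^3 with the standard inner product (phi below is just an arbitrary example functional of mine):

```python
import numpy as np

# Riesz in R^3 with the standard dot product: the functional phi
# corresponds to the unique vector f with phi(v) = <v, f> for all v.
def phi(v):
    return 2.0 * v[0] - v[1] + 3.0 * v[2]

# Recover f by applying phi to the standard basis vectors.
f = np.array([phi(e) for e in np.eye(3)])   # [2., -1., 3.]

# Check phi(v) == <v, f> on a random vector.
rng = np.random.default_rng(1)
v = rng.standard_normal(3)
print(np.isclose(phi(v), v @ f))            # True
```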
I wonder, what do you think about the chapter about dual spaces, dual maps, annihilator, etc.?
Nothing, because it wasn’t in the material. I worked through the second edition of the book, and the parts on duality seem to be new to the third edition.
Do you mean solving convex optimization problems by solving their dual problems instead?
Yeah, that was one of the applications.
Ok. It’s just that when I learned that, we didn’t even talk about dual spaces in the linear-algebraic sense; we worked just fine in R^n.
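For example (a hypothetical toy problem of mine, not one from any course): for min (x-2)^2 subject to x <= 1, the whole Lagrangian-duality story plays out in plain real arithmetic, no dual spaces in sight.

```python
import numpy as np

# Toy problem: min (x-2)^2 s.t. x <= 1. Primal optimum is x = 1, value 1.
# The Lagrange dual g(lam) = min_x [(x-2)^2 + lam*(x-1)] lives entirely in R.
def g(lam):
    x = 2.0 - lam / 2.0            # unconstrained minimizer of the Lagrangian
    return (x - 2.0) ** 2 + lam * (x - 1.0)

# Maximize the dual over a grid of lam >= 0 (analytically: lam = 2, value 1).
lams = np.linspace(0.0, 10.0, 100001)
dual_opt = np.max(g(lams))
primal_opt = 1.0                   # x = 1 gives (1-2)^2 = 1

print(np.isclose(dual_opt, primal_opt))  # True: strong duality holds here
```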
Thanks for that, I changed it.