Here is an additional perspective, if it’s helpful. Suppose you live in some measure space $X$, perhaps $\mathbb{R}^n$ for concreteness. For any $p \in [1, \infty]$ we can make sense of a space $L^p(X)$ of functions $f : X \to \mathbb{R}$ for which the integral $\int_X |f|^p$ is defined (this technicality can be ignored at a first pass), and we use the norm $\|f\|_p := \left(\int_X |f|^p\right)^{1/p}$. Of course, $L^1$ and $L^2$ are options, and when $X$ is a finite set these norms give, respectively, the sum of absolute values and the square root of the sum of squares of absolute values.
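For a finite $X$ the integral is just a sum, so these norms are easy to compute directly. Here is a minimal sketch in Python (the vector $f$ is a made-up example):

```python
import numpy as np

# A toy finite X with five points: a function f: X -> R is just a vector.
f = np.array([3.0, -1.0, 4.0, -1.0, 5.0])

for p in [1.0, 2.0, 4.0]:
    # ||f||_p = (sum |f_i|^p)^(1/p), the finite-X version of the integral formula
    print(f"p={p}: {np.sum(np.abs(f) ** p) ** (1 / p):.4f}")

# p = infinity is the limit of the above: the maximum absolute value
print("p=inf:", np.max(np.abs(f)))
```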
Fun exercise: If $X$ is two points, then $L^p(X)$ is just $\mathbb{R}^2$ with the norm $\|(a,b)\|_p = (|a|^p + |b|^p)^{1/p}$. Plot the unit circle in this norm (i.e., the set of all points at a distance of 1 from the origin). If $p = 1$, you get a square rotated 45 degrees. If $p = 2$, you get the usual circle. If $p = \infty$, you get an axis-aligned square. As $p$ varies, the “circle” gradually interpolates between these shapes. You could imagine trying to develop a whole theory of geometry (circumferences, areas, etc.) in each of these norms; for example, $p = 1$ gives taxicab geometry. What the other comments are saying (correctly) is that you really need to know about your distribution of errors. If you expect error vectors to form a circular shape (for the usual notion of circle), you should use $L^2$. If you know that they form a diamond-ish shape, use $L^1$. If they form a box (uniform distribution across both axes, i.e., the two are genuinely uncorrelated), use $L^\infty$.
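If you want to actually see the interpolation, here is a quick matplotlib sketch: it takes direction vectors around the origin and rescales each one to have $L^p$ norm exactly 1.

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0, 2 * np.pi, 400)
x, y = np.cos(theta), np.sin(theta)   # directions around the origin

for p in [1, 1.5, 2, 4]:
    r = (np.abs(x) ** p + np.abs(y) ** p) ** (1 / p)  # ||(x,y)||_p in each direction
    plt.plot(x / r, y / r, label=f"p={p}")            # rescale to norm exactly 1

# p = infinity: ||(x,y)||_inf = max(|x|, |y|)
r = np.maximum(np.abs(x), np.abs(y))
plt.plot(x / r, y / r, label="p=inf")

plt.gca().set_aspect("equal")
plt.legend()
plt.title("Unit circles in the L^p norm")
plt.show()
```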
To have a larger and more useful example, let $X$ be a 1000 x 1000 grid, and regard $\mathbb{R}$ as the set of possible colors between absolute black at $-\infty$ and absolute white at $+\infty$ (or use any other scheme you wish). Then a function $f : X \to \mathbb{R}$ is just a 1000 x 1000 pixel black-and-white image. The space of all such functions has dimension 1,000,000 as a real vector space, and the concern about whether integrals exist goes away ($X$ is a finite set). If $f$ and $g$ are two such images, the whole discussion about $L^p$ concerns precisely the issue of how we measure the distance $\|f - g\|_p$.
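Concretely (with random arrays standing in for actual images):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=(1000, 1000))   # stand-ins for two grayscale images
g = rng.normal(size=(1000, 1000))

diff = np.abs(f - g).ravel()        # treat each image as a vector in R^1000000
print("L1 distance:  ", diff.sum())
print("L2 distance:  ", np.sqrt((diff ** 2).sum()))
print("Linf distance:", diff.max())
```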
Suppose I have a distinguished subspace $K$ of my vector space. Maybe I took ten thousand pictures of human faces perfectly centered in the frame, with the faces scaled to exactly fit the 1000 x 1000 grid from top to bottom. Then $K$ can be the span of those photos, which is roughly the “space of human face images,” or at least the “space of images that look vaguely face-like” (principal component analysis tells us that we actually want to look for a much smaller-dimensional subspace of $K$ that captures most of the real “human face” essence, but let’s ignore that for now).
Now I have a random image $h : X \to \mathbb{R}$, and I want to know whether this image is a human face. The question is “how far away is $h$ from the distinguished subspace $K$?” I might have some cutoff distance where images below the cutoff are classified as faces, and images above the cutoff as non-faces.
Speaking in very rough terms, a Banach space is a vector space that comes with a notion of size and a notion of limits. A Hilbert space is a vector space that comes with a notion of size, a notion of angles, and a notion of limits. The nice thing about Hilbert spaces is that the question “how far away is $h$ from the distinguished subspace $K$?” has a canonical best answer: the orthogonal projection. It is exactly analogous to the picture in the plane where you take $K$, draw the unique line from $h$ to $K$ that makes a right angle, and take the point of intersection. That point is the unique closest point, and the projection onto the closest point is a linear operator.
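In coordinates, projecting onto a subspace spanned by a few vectors is a least-squares problem. A small sketch (the dimensions and data here are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 100, 5                      # ambient dimension, dimension of the subspace K
K = rng.normal(size=(d, k))        # columns span a made-up subspace K
h = rng.normal(size=d)             # the image we want to classify

# Orthogonal projection of h onto span(K): solve min_c ||K c - h||_2
c, *_ = np.linalg.lstsq(K, h, rcond=None)
proj = K @ c                       # the unique closest point of K to h

print("distance from h to K:", np.linalg.norm(h - proj))
# Sanity check: the residual is orthogonal to every column of K
print("max |<h - proj, K_i>|:", np.max(np.abs(K.T @ (h - proj))))
```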
In a Banach space, as you have observed in this post, there need not exist a unique closest point. The set of closest points can be a whole segment rather than a single point, and there is no canonical linear projection operator onto $K$; the distance between a point and a subspace is much less well-behaved.
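You can see the non-uniqueness already in $\mathbb{R}^2$ with the sup norm; a quick numerical check:

```python
import numpy as np

# In R^2 with the sup norm, take h = (0, 1) and K = the x-axis.
h = np.array([0.0, 1.0])
for t in np.linspace(-2, 2, 9):
    point = np.array([t, 0.0])
    dist = np.max(np.abs(h - point))   # L-infinity distance
    print(f"t={t:+.1f}  dist={dist:.1f}")
# Every point (t, 0) with |t| <= 1 sits at distance exactly 1 from h:
# a whole segment of closest points, not a unique one.
```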
It is a theorem that $L^p(X)$ is a Hilbert space if and only if $p = 2$. That is, only the $L^2$ norm admits a compatible notion of angles (hence perpendicularity/orthogonality, hence orthogonal projection). You can talk about the “angle between” two images in the $L^2$ norm, and subtract off the component of one image “along” another image. I can find which directions in $K$ are the “face-iest” by finding the vector along which my data points have the strongest components, projecting away from that direction, finding a vector orthogonal to my “face-iest” vector along which the components are next strongest, projecting that away, and so on, until I have identified a lower-dimensional subspace of $K$ whose directions (the principal components) capture the property of “face-iness.”

There are all kinds of fun things you can do with projection to analyze structure in a data set. You can project faces onto face-subspaces: project a face onto the space of female faces, or male faces, and find the “best female approximation” or “best male approximation” of a face; try to classify faces by age; try to recognize whether a face belongs to one specific person (shot independently across many different photos) by projecting onto their personal subspace; and so on.
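In practice, this greedy orthogonal procedure is exactly what the singular value decomposition computes. A minimal PCA sketch, with random data standing in for the face photos:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 50                     # made-up stand-ins: 200 "faces" in R^50
faces = rng.normal(size=(n, d))
mean = faces.mean(axis=0)

# Center the data; the right singular vectors are the principal directions,
# ordered from "face-iest" on down.
_, s, Vt = np.linalg.svd(faces - mean, full_matrices=False)

top = Vt[:10]                      # the 10 strongest orthonormal directions
h = rng.normal(size=d)             # a new image
approx = mean + top.T @ (top @ (h - mean))   # project h onto the PCA subspace
print("reconstruction error:", np.linalg.norm(h - approx))
```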
In the infinite-dimensional $L^2(X)$ function spaces, these projections let you reconstruct the best approximation of a function by polynomials (least-squares polynomial approximations, via orthogonal families like the Legendre polynomials; note these are not the Taylor polynomials, which solve a different problem) or by sinusoids (the Fourier approximations) or by whatever you want (e.g. wavelet approximations). You could project onto eigenspaces of some operator, where the projection might correspond to various energy levels of some Hamiltonian.
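For instance, a partial Fourier series is literally an orthogonal projection: each coefficient is an $L^2$ inner product. A small numerical sketch on $(0, \pi)$:

```python
import numpy as np

# Project f(x) = x onto the first few sine modes of L^2(0, pi);
# the functions sqrt(2/pi) * sin(n*x) form an orthonormal family there.
x = np.linspace(0.0, np.pi, 100001)
dx = x[1] - x[0]
f = x.copy()

approx = np.zeros_like(x)
for n in range(1, 6):
    e_n = np.sqrt(2.0 / np.pi) * np.sin(n * x)
    coeff = np.sum(f * e_n) * dx     # the L^2 inner product <f, e_n>
    approx += coeff * e_n            # add the component of f "along" e_n

print("L2 error of the 5-term approximation:",
      np.sqrt(np.sum((f - approx) ** 2) * dx))
```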
The reason we prefer $L^2$ in quantum mechanics is that, while eigenvectors and eigenvalues make sense in any of these spaces, projection operators onto eigenspaces are very much a special $L^2$ thing. Thus, things like spectral decomposition, and projection onto eigenstates of observable operators, all require you to live inside of $L^2$.
None of this works in $L^p(X)$ for $p \neq 2$, precisely because “orthogonal” projection does not make sense. That is not to say that these spaces are useless. If you are in a function space, you might sometimes do weird things that take you outside of $L^2$ (i.e., they make the integral in question undefined) and you end up inside of some other $L^p$, and that is not necessarily the end of the world. Something as innocent as pointwise multiplication can take you outside of the $L^p$ space you started in and put you into a different one (i.e., the integral of $|f|^p$ might fail to exist while the integral of $|f|^q$ for some $q \neq p$ converges). That is to say, these spaces are Banach spaces but not Banach algebras: they do not have a nice multiplication law. The flexibility of moving to different $p$’s can help get around this difficulty. Another example is taking dual spaces, which generally lands you in a different $p$ unless, funnily enough, $p = 2$ (the only $L^p$ space that is self-dual).
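A concrete instance of the multiplication problem: take $f(x) = x^{-1/2}$ on $(0, 1)$. Then

$$\int_0^1 |f| \, dx = \int_0^1 x^{-1/2} \, dx = 2 < \infty, \qquad \int_0^1 |f|^2 \, dx = \int_0^1 \frac{dx}{x} = \infty,$$

so $f \in L^1(0,1)$ but the pointwise product $f \cdot f \notin L^1(0,1)$. (For duality, the standard statement is $(L^p)^* \cong L^q$ with $1/p + 1/q = 1$ for $1 < p < \infty$, and $p = q = 2$ is the only fixed point of $p \mapsto q$.)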
tl;dr If you want geometry with angles, you are required to use $p = 2$. If you don’t feel like you need angles (hence no orthogonality, no projections, no spectral decompositions, no “component of $x$ along $y$,” no PCA, etc.), then you are free to use any other $L^p$. Sometimes the latter is forced upon you, but usually only if you’re in the infinite-dimensional function space setting.