I also wouldn’t give this result (if I’m understanding which result you mean) as an example where the assumptions are technicalities, inessential to the “spirit” of the result. Assuming monotonicity or commutativity (either one is sufficient) is crucial here; otherwise you could have some random (commutative) group with the same cardinality as the reals.
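To make that concrete (a standard transport-of-structure sketch; I’m assuming here that the result in question concerns binary operations on the reals): take any bijection $\varphi : \mathbb{R} \to G$ onto a group $(G, \cdot)$ and pull the group law back along it:

$$x * y := \varphi^{-1}\bigl(\varphi(x) \cdot \varphi(y)\bigr).$$

Then $(\mathbb{R}, *)$ is a group isomorphic to $G$, and commutative whenever $G$ is, but since $\varphi$ can scramble the reals arbitrarily, $*$ bears no relation to the order or topology on $\mathbb{R}$ unless an assumption like monotonicity pins it down.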
Generally, I think math is the wrong comparison here. To be fair, there are other examples of results in math where the assumptions are “inessential for the core idea”, which I think is what you’re gesturing at. But I think math is different in this dimension from other fields, where often you don’t lose much by glossing over technicalities (in fact, the question of how much to fuss over technicalities, e.g. whether you can play fast and loose with infinities, or how careful you need to be about what kinds of functions are allowed in your fields, is the main divider between math and theoretical physics).
In my experience in pure math, when you notice that the “boilerplate” assumptions on your result seem inessential, this is usually for one of the following reasons:
1. A more general result is in fact true, and the proof works with fewer/weaker assumptions, but the theorem is stated in this form either for historical reasons or because some of the results it uses (lemmas, etc.) are harder to prove in more generality.
2. The result is true in more generality, but proving the more general result is genuinely harder or requires a different technique, and this can sometimes lead to new and useful insights.
3. The result is false (or unknown) in more generality: the “boilerplate” assumptions are actually essential, and understanding why will give more insight into the proof (despite their seeming inessential at first).
4. The assumptions the result actually needs are weaker than the ones the theorem is stated with, but it’s messy to spell out the “minimal” assumptions, and it’s easier to compress the result by stating it for a more restrictive but more standard class of objects. (In this way, a lot of results that are true for some messy class of functions are easier to remember and use for a more restrictive class: most results that use “Schwartz spaces” are of this form, and results that are true for distributions are often stated, for simplicity, for functions.)
5. Some assumptions are needed for things to “work right” but are kind of “small”: trivial to check, or mostly just controlling for degenerate edge cases, and they can safely be compressed away in your understanding of the proof if you know what you’re doing. (A standard example is checking for the identity in group laws: it’s usually trivial to verify, and the “meaty” part of the axioms is associativity; another is assuming a ring doesn’t have 0 = 1, i.e., isn’t the degenerate one-element ring; see the one-line computation after this list.)
6. There’s some dependence on logical technicalities, or on which axioms you assume (especially relevant in physics- or CS/cryptography-adjacent areas, where different additional axioms like P != NP are used; these can come in different flavors which interface with proofs in different ways, but often don’t change the essentials).
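To spell out the last example in 5 (a completely standard computation, not specific to anything above): if $0 = 1$ in a ring $R$, then for every $x \in R$

$$x = x \cdot 1 = x \cdot 0 = 0,$$

so $R$ is the one-element ring. Excluding $0 = 1$ therefore rules out exactly this degenerate case and nothing else.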
I think you’re mostly talking about 6 here, though I’m not sure (and I’m not sure math is the best source of examples for this). I think there’s also a sort of “opposite” phenomenon, where a result is true in one context but in fact generalizes well to other contexts. Often the way to generalize is standard, and so understanding the “essential parts” of the proof in any one context is sufficient to recreate it in other contexts, with suitably modified constructions/axioms. For example, many results about sets generalize to topoi, and many results about finite-dimensional vector spaces generalize to infinite-dimensional ones. This might also be related to what you’re talking about. But generally, I think the way you conceptualize “essential vs. boilerplate” is genuinely different in math vs. theoretical physics/CS/etc.