I think this is mostly arbitrary.
So, in the 20th century Russell’s paradox came along and forced mathematicians into creating constructive theories. For example, in ZFC set theory, you begin with the empty set {} and build all other sets out of a tower of lower-level sets. Maybe the natural numbers become {}, {{}}, {{{}}}, etc. With different axioms you might get a type theory instead; in fact, any programming language is basically a formal logic. The basic building blocks, like the empty set or the built-in types, are called atoms.
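As a toy illustration of that tower, here’s a minimal Python sketch of the {}, {{}}, {{{}}} encoding above (the name `zermelo` is my own label for it; `frozenset` stands in for a pure set, since ordinary Python sets can’t be elements of other sets):

```python
def zermelo(n: int) -> frozenset:
    """Encode n as a nested set: 0 -> {}, n+1 -> {n}."""
    s = frozenset()           # 0 is the empty set
    for _ in range(n):
        s = frozenset([s])    # each successor wraps the previous set
    return s

print(zermelo(0))  # frozenset()                           i.e. {}
print(zermelo(2))  # frozenset({frozenset({frozenset()})}) i.e. {{{}}}
```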
In algebraic topology, the atom is the simplex: line segments in one dimension, triangles in two dimensions, tetrahedra in three dimensions, and so on. I think they generally use an axiom of infinity, so simplices can be made arbitrarily small (convenient when you have smooth curves like circles), but they need to be defined at the lowest level. This includes how you build simplices from lower-dimensional simplices! And this is where the boundary comes in.
Say you have a triangle (2-simplex) [A, B, C]. Naively, we could define its boundary as the sum of its edges:
$\partial[A,B,C] = [A,B] + [A,C] + [B,C].$
However, if we stuck two of them together, the shared edge [A, C] wouldn’t disappear from the boundary:
$\partial([A,B,C] + [A,C,D]) = [A,B] + [A,C] + [B,C] + [A,C] + [A,D] + [C,D].$
This is why they usually alternate sign, so
$\partial[A,B,C] = [A,B] - [A,C] + [B,C].$
Then, since
$\partial[A,C] = -\partial[C,A] \implies [A,C] = -[C,A]$
you could also write it like
$\partial[A,B,C] = [A,B] + [B,C] + [C,A].$
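To make this concrete, here’s a rough Python sketch of the signed boundary operator (the chain representation and the name `boundary` are my own; a chain is just a Counter mapping vertex tuples to integer coefficients):

```python
from collections import Counter

def boundary(chain: Counter) -> Counter:
    """Drop each vertex in turn, alternating the sign with its position."""
    out = Counter()
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]
            out[face] += coeff * (-1) ** i
    # discard faces whose coefficients cancelled to zero
    return Counter({s: c for s, c in out.items() if c != 0})

# Two triangles glued along the edge [A, C]:
chain = Counter({("A", "B", "C"): 1, ("A", "C", "D"): 1})

# The shared edge ("A", "C") cancels, leaving the outer loop
# (with ("A", "D") carrying coefficient -1, i.e. oriented as [D, A]).
print(boundary(chain))

# Anticipating the next point: applying it twice gives the empty chain.
print(boundary(boundary(chain)))  # Counter()
```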
It’s essentially a directed loop around the triangle (the analogy breaks when you try higher dimensions, unfortunately). Now, the famous quote “the boundary of a boundary is zero” is relatively trivial to prove. Take the simplex $[A_1, A_2, \dots, A_i, \dots, A_j, \dots, A_n]$ and remove just the two vertices $A_i, A_j$ (with $i < j$), writing $\hat{A}$ for an omitted vertex. If we remove $A_i$ first, we’d get
$(-1)^i (-1)^{j-1} \cdot [A_1, A_2, \dots, \hat{A_i}, \dots, \hat{A_j}, \dots, A_n],$
while removing $A_j$ first gives
$(-1)^j (-1)^i \cdot [A_1, A_2, \dots, \hat{A_i}, \dots, \hat{A_j}, \dots, A_n].$
The first is $-1$ times the second, so everything zeroes out. However, it’s only zero because we decided edges should cancel along shared boundaries. We could choose a different system where they add together instead, which leads to the permanent as the measure of volume instead of the determinant. Or one that uses a much more complex relationship (see: the immanant).
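The cancel-versus-add choice is easiest to see in the permutation-sum formulas. A small sketch (plain Python, nothing library-specific assumed): the determinant weights each permutation by its signature, the permanent weights every permutation by $+1$, and the immanant generalizes both by weighting with an irreducible character $\chi_\lambda$ instead.

```python
from itertools import permutations
from math import prod

def sign(p) -> int:
    """Signature of a permutation: -1 to the number of inversions."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(m):
    """Sum over permutations, weighted by the signature (terms cancel)."""
    n = len(m)
    return sum(sign(p) * prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def permanent(m):
    """The same sum with every weight +1 (terms add instead)."""
    n = len(m)
    return sum(prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

m = [[1, 2], [3, 4]]
print(det(m))        # 1*4 - 2*3 = -2
print(permanent(m))  # 1*4 + 2*3 = 10
```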
I’m certainly not an expert here, but it seems like fermions (e.g. electrons) exchange via the determinant, bosons (e.g. photons) via the permanent, and more exotic particles (e.g. anyons) via something immanant-like. So, when people base their speculations on the “boundary of a boundary” being a fundamental piece of reality, it bothers me.
Thanks, hadn’t realized how this related to algebraic topology. Reminds me of semi-simplicial type theory.
A note on Russell’s paradox: the problem with the Russell set isn’t that it’s nonconstructive. Rather, the problem is that we allowed too much freedom by asserting that for every property, there is a set of the things satisfying it. The conventional fix is to drop that axiom of unrestricted comprehension and add the axiom of specification (plus a couple of other axioms) to ensure we still have the sets we need.
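As a toy illustration (not a formal proof of anything): Python’s set comprehension behaves like specification rather than unrestricted comprehension, because it can only ever filter an existing set, so the Russell predicate just carves out a harmless subset:

```python
# Some sets built up from the empty set, as in the tower above.
S = {frozenset(), frozenset({frozenset()})}

# Specification: {x in S | x is not a member of x}. Restricted to S,
# the Russell predicate yields an ordinary subset, not a paradox.
russell_in_S = {x for x in S if x not in x}
print(russell_in_S == S)  # True: no set here contains itself
```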
Even without Russell’s paradox, you can still prove things nonconstructively.