Ok, I’ve read up on Cantor’s theorem now, and I think the issue lies in the types of A and P(A), and the solution to the paradox is to borrow a trick from type theory. A is defined as the set of all sets, so the obvious question is: sets of what key type? Let that key type be t. Then
A: t=>bool
P(A): (t=>bool)=>bool
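(To make the notation concrete, here’s a minimal sketch of the “sets as characteristic functions” reading in Haskell; the names Set, PowerSet, evens and containsZero are just mine, nothing standard:)

-- a set of t's is its characteristic function, i.e. t => bool
type Set t = t -> Bool

-- a member of P(A) is a subset of A, so it is itself a Set t,
-- and P(A) has type Set (Set t), i.e. (t => bool) => bool
type PowerSet t = Set (Set t)

-- tiny example: the set of even Ints, and the collection of sets containing 0
evens :: Set Int
evens n = n `mod` 2 == 0

containsZero :: PowerSet Int
containsZero s = s 0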
We defined P(A) to be in A, and since A contains all sets, every element of P(A) (each of which has type t=>bool) is also an element of A, so a t=>bool is also a t. Let T stand for all the other possible types for t; then t = (t=>bool) + T. Now, one common way to deal with recursive types like this is to treat them as the limit of a sequence of types:
t[i] = (t[i-1]=>bool) + T
A[i]: t[i]=>bool
P(A[i]) = A[i+1]
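(A sketch of what the first few stages might look like in Haskell, if I’m reading this right; Elem0/Elem1/Elem2 are names I’m inventing for illustration:)

-- t[0] = T: only the non-set elements
data Elem0 t = Base0 t

-- t[1] = (t[0]=>bool) + T: level-0 sets, or non-set elements
data Elem1 t = Set1 (Elem0 t -> Bool) | Base1 t

-- t[2] = (t[1]=>bool) + T, and so on
data Elem2 t = Set2 (Elem1 t -> Bool) | Base2 t

-- A[i] would have type Elem_i t -> Bool, and P(A[i]) = A[i+1] lives one level up;
-- the recursive type t is meant to be the limit of this tower.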
Then when we take the limit,
t = lim i->inf t[i]
A = lim i->inf A[i]
P(A) = lim i->inf P(A[i])
Then suddenly, paradoxes based on the cardinality of A and P(A) go away, because those cardinalities diverge!
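(To spell out the divergence: at each level, taking the power set exponentiates the cardinality,
|A[i+1]| = |P(A[i])| = 2^|A[i]|
so |A[i]| and |P(A[i])| both grow without bound as i->inf; that’s what I mean by the cardinalities diverging.)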
I’m not sure I know enough about type theory to evaluate this, although I do know that Russell’s original attempts to repair the defect involved type theory (Principia Mathematica uses a form of type theory; however, in that form one still can’t form the set of all sets). I don’t think the above works, but I don’t quite see what’s wrong with it. Maybe Sniffnoy or someone else more versed in these matters can comment.
I don’t know anything about type theory; when I wrote that I’d heard it has philosophical problems when applied to set theory, I meant that I’d heard it from you. What the problems might actually be was my own guess...
Huh. Did I say that? I know next to nothing about type theory. When did I say that?