And for anyone else who was wondering which condition in Kalai and Schmeidler’s theorem fails for adding up utility functions: as far as I can tell, it’s cardinal independence of alternatives, but for an unsatisfying reason (again, as far as I can tell), namely that restricting a utility function to a subset of outcomes changes the normalization used in their definition of adding up utility functions. If you’re willing to bite the bullet and work with actual utility functions rather than equivalence classes of functions, this won’t matter to you, but then you have other issues (e.g. utility monsters).
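To make the normalization point concrete, here is a toy sketch (my own illustration, not their construction; it assumes a simple [0, 1] range normalization, which may differ in detail from the one Kalai and Schmeidler actually use, and the numbers are made up): summing normalized utilities over the full outcome set ranks A above B, but restricting to a subset and re-normalizing flips that ranking.

```python
# Toy illustration (assumed [0, 1] range normalization, made-up numbers):
# re-normalizing after restricting the outcome set can flip the summed ranking.

def normalize(u, outcomes):
    """Rescale u so its min is 0 and its max is 1 over the given outcomes."""
    lo = min(u[x] for x in outcomes)
    hi = max(u[x] for x in outcomes)
    return {x: (u[x] - lo) / (hi - lo) for x in outcomes}

def aggregate(utilities, outcomes):
    """Sum each agent's normalized utility over the given outcome set."""
    normed = [normalize(u, outcomes) for u in utilities]
    return {x: sum(n[x] for n in normed) for x in outcomes}

u1 = {"A": 0, "B": 5, "C": 6, "D": 10}
u2 = {"A": 10, "B": 4, "C": 0, "D": 2}

print(aggregate([u1, u2], ["A", "B", "C", "D"]))
# {'A': 1.0, 'B': 0.9, 'C': 0.6, 'D': 1.2}  -> A ranks above B

print(aggregate([u1, u2], ["A", "B", "C"]))
# {'A': 1.0, 'B': 1.233..., 'C': 1.0}       -> B ranks above A
```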
Edit: I would also like to issue a general warning against taking theorems too seriously. Theorems are very delicate creatures; often if their assumptions are relaxed even slightly they totally fall apart. They aren’t necessarily well-suited for reasoning about what to do in the real world (for example, I don’t think the Aumann agreement theorem is all that relevant to humans).
Are the criteria for antifragility formal enough that there could be a list of antifragile theorems?
No. The fragility is in humans’ tendency to misinterpret theorems, not in the theorems themselves, and humans are complex enough that I highly doubt you’d be able to come up with a useful list of criteria that could guarantee that no human would ever misinterpret a theorem.
Harsanyi’s social aggregation theorem seems more relevant than Arrow.
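For reference, Harsanyi’s theorem says (roughly, stating it from memory): if each individual and society all have von Neumann–Morgenstern preferences over lotteries, and society is indifferent between any two lotteries that every individual is indifferent between, then the social utility function must be an affine combination of the individual utility functions:

$$W = c + \sum_i a_i U_i,$$

which is exactly the “adding up utility functions” form, though with the weights $a_i$ not pinned down.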