I’d say the paradox of tolerance is probably correct, in the sense that an adversarial example can always be constructed as a counterexample to a perfectly tolerant society. I suspect it’s related to the reason you can’t optimize all goals at once, or why no learning algorithm performs better than any other across all possible problems.
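(For the no-free-lunch point, the usual Wolpert–Macready statement is roughly that, averaged over all possible objective functions $f$, any two search algorithms $a_1$ and $a_2$ yield the same distribution of sampled cost values:

$$\sum_f P(d_m^y \mid f, m, a_1) = \sum_f P(d_m^y \mid f, m, a_2),$$

where $d_m^y$ is the sequence of cost values seen after $m$ evaluations; the notation here is the standard one from their paper, not anything specific to this discussion. The upshot is that no algorithm can dominate another over every problem.)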
I think your perspective also relies on an implicit assumption that may be flawed. I’m not quite sure exactly what it is, but it’s something like assuming that agents are primarily goal-directed entities. That is the game-theoretic framing, and within it you may well be right.
But here I’m trying to point out precisely that people have qualities beyond the assumptions of a game-theoretic setup. Most of the time we don’t actually know what our goals are or where they came from. So here I’m thinking of people more as dynamical systems.