It’s often difficult to prove a negative and I think the non-existence of a crisp definition of any given concept is no exception to this rule. Sometimes someone wants to come up with a crisp definition of a concept for which I suspect no such definition to exist. I usually find that I have little to say and can only wait for them to try to actually provide such a definition. And sometimes I’m surprised by what people can come up with. (Maybe this is the same point that Roman Leventov is making.)
Also, I think there are many different ways in which concepts can be crisp or non-crisp. I think cooperation can be made crisp in some ways and not in others.
For example, I do think that (in contrast to human values) there are approximate characterizations of cooperation that are useful, precise and short. For example: “Cooperation means playing Pareto-better equilibria.”
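As a toy illustration of that characterization (the game and the payoff numbers below are my own choice, not from the comment): in a Stag Hunt there are two pure equilibria, and the cooperative outcome is exactly the Pareto-better one.

```python
from itertools import product

# Illustrative Stag Hunt (payoff numbers are my own illustrative choice).
# Strategies: 0 = Stag, 1 = Hare; payoffs are (row player, column player).
payoffs = {
    (0, 0): (4, 4),  # both hunt stag
    (0, 1): (0, 3),  # row hunts stag alone
    (1, 0): (3, 0),
    (1, 1): (3, 3),  # both hunt hare
}

def is_nash(cell):
    # A profile is a pure-strategy Nash equilibrium if neither player
    # gains by unilaterally deviating.
    r, c = cell
    row_pay, col_pay = payoffs[cell]
    return (all(payoffs[(alt, c)][0] <= row_pay for alt in (0, 1)) and
            all(payoffs[(r, alt)][1] <= col_pay for alt in (0, 1)))

def pareto_better(a, b):
    # a is Pareto-better than b: at least as good for both players,
    # strictly better for at least one.
    return (all(x >= y for x, y in zip(payoffs[a], payoffs[b]))
            and payoffs[a] != payoffs[b])

equilibria = [cell for cell in product((0, 1), repeat=2) if is_nash(cell)]
print(equilibria)                     # [(0, 0), (1, 1)]
print(pareto_better((0, 0), (1, 1)))  # True: (Stag, Stag) Pareto-dominates
```

On this definition, "cooperating" in the Stag Hunt means playing (Stag, Stag) rather than the other, Pareto-worse equilibrium.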
One way in which I think cooperation isn’t crisp is that you can give multiple different sensible definitions that don’t fully agree with each other. (For example, some definitions (like the above) will include coordination in fully cooperative (i.e., common-payoff) games, and others won’t.) I think in that way it’s similar to comparing sets by size, where you can give lots of useful, insightful, precise definitions that disagree with each other. For example, bijection, isomorphism, and the subset relationship can each tell us when one set is larger than or as large as another, but they sometimes disagree and nobody expects that one can resolve the disagreement between the concepts or arrive at “one true definition” of whether one set is larger than another.
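The set-comparison analogy can be made concrete even for finite sets (a toy illustration of my own, not from the comment): the injection-based order and the subset-based order are each precise, but they disagree about whether some pairs of sets are comparable at all.

```python
from itertools import permutations

# Two precise "B is at most as large as A" relations that can disagree
# (finite sets only; this is a toy illustration):
#  - injection order: some injective map B -> A exists
#  - subset order:    B is a subset of A

def injects_into(b, a):
    # Brute-force search for an injective map from b into a: any
    # length-|b| arrangement of distinct elements of a is one.
    return next(permutations(sorted(a), len(b)), None) is not None

def subset_of(b, a):
    return set(b) <= set(a)

A, B = {1, 2}, {3}
print(injects_into(B, A))  # True: 3 can be mapped injectively into {1, 2}
print(subset_of(B, A))     # False: the subset order calls them incomparable
```

So the injection order says B is at most as large as A, while the subset order refuses to rank them, which is precisely the kind of disagreement between sensible definitions described above.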
When applied to the real world rather than rational agent models, I would think we also inherit fuzziness from the application of the rational agent model to the real world. (Can we call the beneficial interaction between two cells cooperation? Etc.)
Yes, I fully agree with all of this except one point, and with that one point I only want to add a small qualification.
Sometimes someone wants to come up with a crisp definition of a concept for which I suspect no such definition to exist. I usually find that I have little to say and can only wait for them to try to actually provide such a definition. And sometimes I’m surprised by what people can come up with.
The quibble I want to make here is that if we somehow knew that the Kolmogorov complexity of the given concept was at least X (and if that were even a sensible thing to say), and somebody was trying to come up with a definition with K-complexity <<X, then we could safely say that this has no chance of working. But in reality, we do not know anything like this, so the best we can do (as I try to do with this post) is to say “this concept seems kinda complicated, so perhaps we shouldn’t be too surprised if crisp definitions end up not working”.
I mean, translated to algorithmic description land, my claim was: It’s often difficult to prove a negative and I think the non-existence of a short algorithm to compute a given object is no exception to this rule. Sometimes someone wants to come up with a simple algorithm for a concept for which I suspect no such algorithm to exist. I usually find that I have little to say and can only wait for them to try to actually provide such an algorithm.
So, I think my comment already contained your proposed caveat. (“The concept has K-complexity at least X” is equivalent to “There’s no algorithm of length <X that computes the concept.”)
Of course, I do not doubt that it’s in principle possible to know (with high confidence) that something has high description length. If I flip a coin n times and record the results, then I can be pretty sure that the resulting binary string will take at least ~n bits to describe. If I see the graph of a function and it has 10 local minima/maxima, then I can conclude that I can’t express it as a polynomial of degree <11 (a polynomial of degree d has at most d-1 local extrema). And so on.