The crisp portion of such a self-reference system will be equivalent to a Kripke fixed-point theory of truth, which I like. It won’t be the least fixed point, however, which is the one I prefer; still, that should not interfere with the normal mathematical reasoning process in any way.
In particular, the crisp subset which contains only statements that could safely occur at some level of a Tarski hierarchy will have the truth values we’d want them to have. So, there should be no complaints about the system coming to wrong conclusions, except where problematically self-referential sentences are concerned (sentences which are assigned no truth value in the least fixed point).
So; the question is: do the sentences which are assigned no truth value in Kripke’s construction, but are assigned real-numbered truth values in the fuzzy construction, play any useful role? Do they add mathematical power to the system?
For those not familiar with Kripke’s fixed points: basically, they allow us to use self-reference, but to say that any sentence whose truth value depends eventually on its own truth value might be truth-value-less (ie, meaningless). The least fixed point takes this to be the case whenever possible; other fixed points may assign truth values when it doesn’t cause trouble (for example, allowing “this sentence is true” to have a value).
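Kripke's construction can be sketched as a tiny fixed-point computation. The following Python toy is my own drastically simplified encoding (invented sentence names, and only atoms plus truth-attributions, not full first-order sentences); it iterates the "jump" from the everywhere-undefined valuation up to the least fixed point:

```python
# Toy sketch of Kripke's least fixed point (illustrative only).
# A sentence is an atom with a fixed value, or a claim T(s)/NotT(s)
# about another sentence.  None means "no truth value yet".
sentences = {
    "snow":   ("atom", True),      # grounded: "snow is white"
    "a":      ("T", "snow"),       # "'snow' is true"
    "liar":   ("NotT", "liar"),    # "this sentence is not true"
    "teller": ("T", "teller"),     # "this sentence is true"
}

def jump(val):
    """One application of the jump: evaluate each sentence against
    the current partial valuation."""
    new = {}
    for name, (kind, arg) in sentences.items():
        if kind == "atom":
            new[name] = arg
        else:
            v = val[arg]           # T/NotT is defined only if the
            if v is None:          # referent already has a value
                new[name] = None
            else:
                new[name] = v if kind == "T" else (not v)
    return new

val = {name: None for name in sentences}   # least element: all undefined
while (nxt := jump(val)) != val:           # iterate to the fixed point
    val = nxt

print(val)   # 'snow' and 'a' grounded True; 'liar' and 'teller' stay None
```

The liar and the truth-teller both come out truth-value-less here; in a non-least fixed point the truth-teller could instead be consistently assigned a value, which is exactly the difference described above.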
If discourse about the fuzzy value of (what I would prefer to call) meaningless sentences adds anything, then it is by virtue of allowing structures to be defined which could not be defined otherwise. It seems that adding fuzzy logic will allow us to define “essentially fuzzy” structures… concepts which are fundamentally ill-defined. But as far as the crisp structures that arise are concerned (correct me if I’m wrong), it seems fairly clear to me that nothing will be added that couldn’t be added just as well, or better, by talking directly about the class of real-valued functions we’d be using as the fuzzy truth-functions.
To sum up: reasoning in this way seems to have no bad consequences, but I’m not sure it is useful...
By the way, how would you incorporate probabilities into binary logic? Either you can include statements about probabilities in binary logic (“probability on top of logic”), or you can assign probabilities to binary logic statements (“logic on top of probability theory”). The situation is just analogous to that of fuzziness. If you do #1, that means binary logic is the most fundamental layer. If you do #2, I can also do an analogous thing with fuzziness.
The rules of probability reduce to the rules of binary logic when the probabilities are all zero or one, so you get binary logic for free just by using probability.
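A quick sanity check of this reduction (a sketch, not a proof: it assumes independence, so conjunction is a product):

```python
# With probabilities restricted to {0, 1}, the probability rules for
# negation, conjunction, and disjunction reproduce the Boolean
# truth tables.
from itertools import product

for pa, pb in product([0.0, 1.0], repeat=2):
    p_not_a = 1 - pa
    p_and = pa * pb                 # independence assumed
    p_or = pa + pb - pa * pb
    # compare against the ordinary Boolean connectives
    assert p_not_a == (1.0 if not pa else 0.0)
    assert p_and == (1.0 if (pa and pb) else 0.0)
    assert p_or == (1.0 if (pa or pb) else 0.0)

print("probability collapses to binary logic at the extremes")
```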
Yes, we all know that ;)
But under this approach the binary logic is NOT operating at a fundamental level—it is subsumed by a probability theory. In other words, what is true in the binary logic is not really true; it depends on the probability assigned to the statement, which is external to the logic. In like manner, I can assign fuzzy values to a binary logic which are external to the binary logic.
It’s good that you pointed out Kripke’s fixed point theory of truth as a solution to the Liar’s paradox. It seems to be an acceptable solution.
On the other hand, I also agree that “fuzziness as a matter of degree” can be added on top of a binary logic. That would be very useful for dealing with commonsense reasoning—perhaps even indispensable.
What is particularly controversial is whether truth should be regarded as a matter of degree, ie, the development of a fuzzy-valued logic. At this point, I am kinda 50-50 about it. The advantage of doing this is that we can translate commonsense notions easily, and it may be more intuitive to design and implement the AGI. The disadvantage is that we need to deal with a relatively new form of logic (ie, many-valued logic) and its formal semantics, proof theory, model theory, deduction algorithms, etc. With binary logic we may be on firmer ground.
YKY,
The problem with Kripke’s solution to the paradoxes, and with any solution really, is that it still contains reference holes. If I strictly adhere to Kripke’s system, then I can’t actually explain to you the idea of meaningless sentences, because it’s always either false or meaningless to claim that a sentence is meaningless. (False when we claim it of a meaningful sentence; meaningless when we claim it of a meaningless one.)
With the fuzzy way out, the reference gap is that we can’t have discontinuous functions. This means we can’t actually talk about the fuzzy value of a statement: any claim “This statement has value X” is a discontinuous claim, with value 1 at X and value 0 everywhere else. Instead, all we can do is get arbitrarily close to saying that, by having continuous functions that are 1 at X and fall off sharply around X… this, I admit, is rather nifty, but it is still a reference gap. Warrigal refers to actual values when describing the logic, but the logic itself is incapable of doing that without running into paradox.
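The approximation idea can be made concrete. In this sketch the tent-shaped membership function is my own choice of continuous approximation, not anything from the original proposal:

```python
# The reference gap: "this statement has value X" would be the
# discontinuous indicator, 1 at X and 0 elsewhere.  A continuous logic
# can only approximate it, e.g. with tent functions narrowing around X.
X = 0.7

def approx_has_value(v, sharpness):
    """Continuous stand-in for 'the value is X': a tent of width
    2/sharpness centred at X."""
    return max(0.0, 1.0 - sharpness * abs(v - X))

for n in (1, 10, 1000):
    at_X = approx_has_value(X, n)          # always exactly 1 at X
    nearby = approx_has_value(X + 0.1, n)  # falls toward 0 as n grows
    print(n, at_X, round(nearby, 3))
```

No member of the family ever equals the indicator, which is the point: the logic can get arbitrarily close to the claim without being able to make it.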
About the so-called “discontinuous truth values”, I think the culprit is not that the truth value is discontinuous (it doesn’t make sense to say a point-value is continuous or not), but rather that we have a binary predicate, “less-than”, which is a discontinuous truth functional mapping.
The statement “less-than(tv, 0.5)” seems to be a binary statement. If we make that predicate fuzzy, it becomes “approximately less than 0.5”, which we can visualize as a decreasing sigmoidal curve, and this curve intersects the identity line (the line of slope 1 through the origin) at 0.5. Thus, the truth value of the fuzzy version of that statement is 0.5, ie, indeterminate.
All in all, this problem seems to stem from the fact that we’ve introduced the binary predicate “less-than”.
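The picture above can be simulated: fuzzify “less-than” into a decreasing sigmoid, feed the statement’s own truth value back into it, and iterate. The steepness k is an arbitrary choice for the sketch:

```python
# Self-referential fuzzy "my truth value is (approximately) less than
# 0.5": iterate tv <- f(tv) and watch it settle where the sigmoid
# crosses the identity line.
import math

def approx_less_than_half(tv, k=2.0):
    """Fuzzy 'tv is approximately less than 0.5' (decreasing sigmoid)."""
    return 1.0 / (1.0 + math.exp(k * (tv - 0.5)))

tv = 0.9                      # start from any initial guess
for _ in range(200):          # fixed-point iteration
    tv = approx_less_than_half(tv)

print(round(tv, 6))           # settles at 0.5, the indeterminate value
```

For this k the iteration is a contraction, so any starting guess lands on the same fixed point at 0.5, matching the intersection argument above.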
I’d like to clear this up for myself. You’re saying that under Kripke’s system we build up a tower of meaningful statements with infinitely many floors, starting from “grounded” statements that don’t mention truth values at all. All statements outside the tower we deem meaningless, but statements of the form “statement X is meaningless” can only become grounded as true after we finish the whole tower, so we aren’t supposed to make them.
But this looks weird. If we can logically see that the statement “this statement is true” is meaningless under Kripke’s system, why can’t we run this logic under that system? Or am I confusing levels?
Call it “expected” truth, analogous to “expected value” in probability and statistics. It’s effectively a way to incorporate a risk analysis into your reasoning.
Yes, I have worked out a fuzzy logic with probability distributions over fuzzy values.
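A minimal sketch of “expected truth” in this sense: place a probability distribution over candidate fuzzy truth values and take its mean (the distribution here is made up for illustration):

```python
# Expected truth = mean of a probability distribution over fuzzy values.
dist = {0.2: 0.1, 0.5: 0.3, 0.8: 0.6}   # fuzzy value -> probability

assert abs(sum(dist.values()) - 1.0) < 1e-9   # distribution sums to 1

expected_truth = sum(v * p for v, p in dist.items())
print(round(expected_truth, 6))   # 0.02 + 0.15 + 0.48 = 0.65
```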