I’m not arguing against fuzzy logic, just that it arguably doesn’t “morally” solve the liar paradox, insofar as it yields similar revenge paradoxes. In natural language we arguably can’t just impose restrictions, like banning non-continuous truth functions such as “is exactly false”, even if we don’t have a more appealing resolution. We can only impose voluntary restrictions on formal languages. For natural language, the only hope would be to argue that the predicate “is exactly false” doesn’t really make sense, or doesn’t actually yield a contradiction, though that seems difficult. Though I haven’t read Field’s book; maybe he has some good arguments.
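To spell out the kind of revenge construction I have in mind with “is exactly false” (my own reconstruction of the standard worry): let R say of itself that it is exactly false, i.e. that its fuzzy value is exactly 0. Then R’s value would have to satisfy

$$ v(R) = \begin{cases} 1 & \text{if } v(R) = 0, \\ 0 & \text{if } v(R) > 0, \end{cases} $$

which has no solution: v(R) = 0 forces the right-hand side to 1, and v(R) > 0 forces it to 0. Continuous truth functions avoid this, since every continuous map from [0,1] to itself has a fixed point, but “is exactly false” is not continuous.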
I’m not arguing against fuzzy logic, just that it arguably doesn’t “morally” solve the liar paradox, insofar as it yields similar revenge paradoxes.
It has been years since I’ve read the book, so this might be a little bit off, but Field’s response to revenge is basically this:
The semantic values (which are more complex than fuzzy values, but I’ll pretend for simplicity that they’re just fuzzy values) are models of what’s going on, not literal descriptions. This idea is intended to respond to complaints like “but we can obviously refer to truth-value-less-than-one, which gives rise to a revenge paradox”. The point of the model is to inform us about which inference systems might be sound and consistent, although we can only ever prove this in a toy setting, thanks to Gödel’s theorems.
So, indeed, within this model, “is exactly false” doesn’t make sense. Speaking outside this model, it may seem to make sense, but we can only step outside of it because it is a toy model.
However, we do get the ability to state ever-stronger Liar sentences with a “definitely” operator (“definitely x” is intuitively twice as strong a truth-claim compared to “x”). So the theory deals with revenge problems in that sense by formulating an infinite hierarchy of Strengthened Liars, none of which cause a problem. IIRC Hartry’s final theory even handles iteration of the “definitely” operator infinitely many times (unlike fuzzy logic).
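To make the “twice as strong” gloss concrete, here is a toy fuzzy-valued rendering (my own gloss for illustration, not Field’s actual machinery, which as noted is more complex): take D(x) = max(0, 2x − 1) for “definitely”, 1 − x for negation, and solve for the value of each strengthened Liar.

```python
# Toy sketch of the "definitely" hierarchy in plain fuzzy logic -- my own gloss,
# not Field's actual semantics. D(x) = max(0, 2x - 1), negation is 1 - x.

def definitely(x: float, n: int = 1) -> float:
    """Apply the toy 'definitely' operator n times."""
    for _ in range(n):
        x = max(0.0, 2.0 * x - 1.0)
    return x

def strengthened_liar_value(n: int) -> float:
    """Value of "this sentence is not definitely^n true", i.e. the solution of
    v = 1 - D^n(v), found by bisection (v - (1 - D^n(v)) is increasing in v)."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if mid - (1.0 - definitely(mid, n)) < 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

for n in range(4):
    print(n, round(strengthened_liar_value(n), 4))
# 0 0.5, 1 0.6667, 2 0.8, 3 0.8889: each level gets a consistent value
# (2**n / (2**n + 1) in closed form), so no finitely-strengthened Liar
# forces a contradiction.
```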
In natural language we arguably can’t just impose restrictions, like banning non-continuous truth functions such as “is exactly false”, even if we don’t have a more appealing resolution. We can only impose voluntary restrictions on formal languages. For natural language, the only hope would be to argue that the predicate “is exactly false” doesn’t really make sense, or doesn’t actually yield a contradiction, though that seems difficult.
Of course in some sense natural language is an amorphous blob which we can only formally model as an action-space which is instrumentally useful. The question, for me, is about normative reasoning—how can we model as many of the strengths of natural language as possible, while also keeping as many of the strengths of formal logic as possible?
So I do think fuzzy logic makes some positive progress on the Liar and on revenge problems, and Hartry’s proposal makes more positive progress.
That seems fair enough. Do you know what Field had to say about the “truth teller” (“This sentence is true”)? While the liar sentence can (classically) be neither true nor false, the problem with the truth teller is that it can be either true or false, with no fact of the matter deciding which. This does seem to be a closely related problem, even if it isn’t always considered a serious paradox. I’m not aware that fuzzy truth values can help here. This is in contrast to Kripke’s proposed solution to the liar paradox: on his account, both the liar and the truth teller are “ungrounded” rather than true or false, because they use the truth predicate in a way that can’t be eliminated. Though I think one can construct some revenge paradoxes with his solution as well.
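To spell out why fuzzy values don’t seem to help with the truth teller (my own gloss): the Liar’s self-referential equation pins its value down uniquely, while the truth teller’s doesn’t constrain it at all:

$$ \text{Liar: } v = 1 - v \;\Rightarrow\; v = \tfrac{1}{2}, \qquad \text{Truth teller: } v = v, \text{ satisfied by every } v \in [0,1]. $$

So the fuzzy machinery can consistently assign the truth teller any value whatsoever, which leaves the “no fact of the matter” worry untouched.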
Anyway, I still think the main argument for fuzzy logic (or at least fuzzy truth values, without considering how logical connectives should behave) is that concepts seem to be inherently vague. E.g. when I believe that Bob is bald, I don’t expect him to have an exact degree of baldness. So the extension of the concept expressed by the predicate “is bald” must be a fuzzy set. So Bob is partially contained in that set, and the degree to which he is contained in it is the fuzzy truth value of the proposition that Bob is bald. This is independent of how paradoxes are handled.
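A minimal sketch of what I mean (the numbers and the membership function are made up purely for illustration):

```python
# Toy fuzzy-set reading of "Bob is bald". The membership function below is entirely
# made up for illustration; nothing hangs on the particular numbers.

def bald_membership(hair_count: int) -> float:
    """Degree to which someone with this many hairs belongs to the fuzzy set 'bald':
    1.0 up to 10,000 hairs, 0.0 from 100,000 hairs, linear in between."""
    if hair_count <= 10_000:
        return 1.0
    if hair_count >= 100_000:
        return 0.0
    return (100_000 - hair_count) / 90_000

# The fuzzy truth value of "Bob is bald" is just Bob's degree of membership:
bobs_hair_count = 40_000                        # hypothetical
truth_value = bald_membership(bobs_hair_count)  # = 2/3, i.e. "mostly bald"
print(truth_value)
```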
(And of course the next problem is then how fuzzy truth values could be combined with probability theory, since the classical axiomatization of probability theory assumes that truth is binary. Are beliefs perhaps about an “expected” degree of truth? How would that be formalized? I don’t know.)
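For instance, one naive reading, just to illustrate the question rather than answer it (everything here is made up for the example): keep an ordinary probability distribution over the precise facts and take the expectation of the fuzzy truth value under it.

```python
# Naive "expected degree of truth" sketch -- an illustration of the question above,
# not an established formalization. The membership function and credences are made up.

def bald(hairs: int) -> float:
    """Toy membership function for the fuzzy set 'bald' (same ramp as the sketch above)."""
    return max(0.0, min(1.0, (100_000 - hairs) / 90_000))

# An ordinary (binary-truth) probability distribution over Bob's exact hair count:
credence = {20_000: 0.3, 40_000: 0.5, 80_000: 0.2}

# "Expected" degree of truth of "Bob is bald" under that distribution:
expected_truth = sum(p * bald(hairs) for hairs, p in credence.items())
print(round(expected_truth, 3))  # ~0.644
```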