Unsolved Problems in Philosophy Part 1: The Liar’s Paradox
Graham Priest discusses The Liar's Paradox for a NY Times blog. It seems that one way of solving the Liar's Paradox is defining a dialetheia, a true contradiction. Less Wrong, can you do what modern philosophers have failed to do and solve or successfully dissolve the Liar's Paradox? This doesn't seem nearly as hard as solving free will.
This post is a practice problem for what may become a sequence on unsolved problems in philosophy.
The formalist school of math philosophy thinks that meaningful questions have to be phrased in terms of finite computational processes. But if you try to write a program for determining the truth value of "this statement is false", you'll see it recurses and never terminates:
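A minimal Python sketch (illustrative only; any faithful rendering behaves the same way):

def liar():
    # truth value of "this statement is false": true exactly when the statement is not true
    return not liar()

liar()  # raises RecursionError: evaluation recurses without ever producing True or False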
See also Kleene-Rosser paradox. This may or may not dissolve the original question for you, but it works for me.
There's more to be said about the paradox because it keeps turning up in many contexts. For example, see Terry Tao's posts about "no self-defeating object". Also note that if we replace "truth" with "provability", the liar's paradox turns into Gödel's first incompleteness theorem, and Curry's paradox turns into Löb's theorem.
ETA: see also Abram Demski’s explanation of Kripke’s fixed point theory here on LW, if that’s your cup of tea.
The Wikipedia link for Curry's paradox claims "It has also been called Löb's paradox after Martin Hugo Löb." Given that you require a word substitution, I take it that Wikipedia is oversimplifying something? (Or perhaps overloading the Löb keyword a tad.)
The two are related, so the overloading is probably not accidental. When I studied math we used to joke that every area of classical math has a Gauss theorem, and more often than not it’s the most important theorem in the area.
Not accidental and not surprising either. But still undesirable. It obfuscates the meaning of people who are talking about either of the concepts specifically.
I was curious enough to look into some background. “Different but basically the same for practical purposes” seems to be the conclusion.
See also: A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points, which treats the Liar’s paradox as an instance of a generalization of Cantor’s theorem (no onto mapping from N->2^N).
I’m not sure if I like this paper (it seems to be trying to do too much), but it did contain something new to me—Yablo’s non-self-referential version of the Liar Paradox: for every natural number n, let S(n) be the statement that for all m>n S(m) is false. Also there is a funny non-self-referential formulation by Quine: “Yields falsehood when preceded by its quotation” yields falsehood when preceded by its quotation.
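For completeness, the standard argument that no consistent assignment of truth values exists: suppose S(n) is true for some n. Then S(m) is false for every m > n; in particular S(n+1) is false, and S(m) is false for every m > n+1. But the latter is exactly what S(n+1) says, so S(n+1) is true, a contradiction. So every S(n) is false. But then, for any fixed n, S(m) is false for every m > n, which is exactly what S(n) says, so S(n) is true after all. Either way we get a contradiction, so long as there are infinitely many statements.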
Interestingly, Yablo's paradox vanishes when there is no infinity. If the last statement of the Yablo sequence exists, it is true, and all the statements at the preceding positions are false. Everything is fine. Another reason I am an infinity atheist.
The “last statement”? This would require that there exists a highest natural number. That seems like it would be a weirder occurrence than the mostly harmless Yablo’s paradox.
Although I suppose we can always choose to work in “the natural numbers mod N”, for some value of N, which is one way to banish “infinity”.
There is no need for ridiculously large numbers. There is always a last statement in the row, and this way, and only this way, no Yablo paradox arises.
I’m not sure what you mean by this. “There is no need”? So is there a highest natural number, or not? Because if not:
If S(N) is the last statement, N is a natural number.
Therefore N + 1 is a natural number and N + 1 > N.
Therefore the statement S(N + 1) exists.
Therefore S(N) is not the last statement. Contradiction.
If there is no infinity (the premise) then there must be.
If there is no infinity there must not be a highest natural number, but there could be if there is infinity?
s/not //
Edit: That looks bad. Let’s see.
s/.ot /
That works.
The second has an implied “This sentence …” so I’d say it’s still self-referential.
edit: actually I don’t think that’s required (the quote is the subject) so it does count I suppose.
If I remember rightly, the process is called “quining” and while it produces similar paradoxes and problems, it is distinct from self-reference. Linguistically, at least—logically one might be a form of the other.
(Upvoted the edit!)
Yablo's version looks like an unrolled infinite loop of a function.
Not to me it doesn’t. Yablo’s version has a “forall” that your translation misses. So in Yablo’s version there’s no consistent way to assign truth values to S(n), but in your version we could make S(n) = “n is odd” or something.
Not exactly. My version is incorrect, yes. But there is, uhm, a controversial way of consistently assigning truth values to Yablo's statements.
In my version, the n-th step of loop unrolling is something like
S(n) = "S(n+1) is false".
Yablo's version is
S(n) = "for all m > n, S(m) is false",
or, unrolled one step,
S(n) = "S(n+1) is false, and for all m > n+1, S(m) is false".
If we extend the set of natural numbers by an element omega such that omega > n for every natural number n, then we can assign S(n) = false for all n in N, and S(omega) = true.
Edit: Oops, the second version of Yablo's statement, which I included to demonstrate why I had an idea of loop unrolling, is not consistent when n equals omega. The original Yablo statement is consistent, though.
Edit: Meta. The thing I always hated about my mind is that it completely refuses to form intuitions about statements which aren’t directly connected to object level (but then what is object level?).
Edit: Meta Meta. On introspection I don’t feel anything about previous statement. Pretty damn consistent...
The above comment is the closest that I have ever found to the following Predicate Logic formalization:
“This sentence is not true.” ∃x ∈ finite strings from the alphabet of predicate logic ∃T ∈ Predicates ∃hasProperty ∈ Predicates | x = hasProperty(x, ~T(x))
Finite string x asserts that it has the property of the negation of the Boolean value result of evaluating predicate T with itself as T’s only argument.
The above is based on Tarski's formal correctness condition for True: For all x, True(x) if and only if φ(x).
Copyright Pete Olcott 2016, 2017
http://LiarParadox.org/
Helpful links. As stated, though, this doesn't dissolve the strengthened liar's paradox.
Does holding the view that meaningful questions have to be phrased in terms of finite computational processes imply the other tenets of formalism?
The Liar’s Paradox is still considered an “unsolved problem in philosophy”? I don’t see why it’s considered a big problem that we’re able to define things that can neither be sorted into the “true” bucket nor the “false” bucket. If you could derive a paradox from, say, the Peano axioms, then that would indeed be problematic, but as it is, why is the fact that you can say “This sentence is false” any more problematic than the fact that you can say “let X = ¬X” without all of logic imploding?
Math is the art of constructing tautologies complicated enough to be useful. I don’t think it’s any mark against it that you can use the same language to describe things that are neither useful nor tautologous.
Good quote.
The encyclopedia of philosophy article did a decent job of motivating it, I thought. Understanding how to classify the Liar’s sentence is linked to being able to use inconsistent information (like us humans do), without being able to prove absolutely everything from the inconsistency.
Also, it’s interesting.
What we humans do is store our underlying representations as contingent networks and only “round them off” to categorical propositions when we reason explicitly about them.
That is, “This sentence is in English” is a categorical proposition, but if I trace it down to the cognitive structures that motivated me to generate it, I won’t find any categorical representations, just contingent ones: spreading networks of activation. Ditto for “This sentence is true” and “This sentence is false” and everything else I might say.
In other words, 0 and 1 are not probabilities.
If what we want to accomplish is to design a system that can use inconsistent information like we humans do, without suddenly discovering that it believes everything, then the thing to do is move away from representing categorical propositions at all.
Now that makes perfect sense to me. If it’s interesting, by all means continue doing it with my blessing (not that you need it)… but if it has something to do with using inconsistent information the way humans do, then I’ve completely failed to understand.
Well, not “the way humans do,” specifically—the fact that humans do it is just a way to motivate making logical systems that can do it too. Hopefully we can find how to do it better than humans do it by standards like consistency.
Well, the problem with probabilities of 0 and 1 is more complicated than “they’re not probabilities.” But I see your point.
That seems tricky. All the input into our brains seems to be translatable into categorical propositions with one extra parameter of probability. But we want to make our logical system deterministic, so that probability is just an ordinary extra parameter, which we’ve already seen doesn’t help resolve the liar’s paradox in simple applications. So are you just proposing making a system that’s like the human brain in that we can’t pick out the influence of individual parts? I think this would be a bad approach, even ignoring the appeal of simplicity, since it likely wouldn’t solve the problem, it would just prevent us from knowing what the problems with the system were.
Fair enough.
If you’re not actually trying to build a system that interprets the Liar’s Paradox the way humans do, but rather a system that (for example) interprets the Liar’s Paradox as a categorical probabilistic proposition without immediately believing everything (perhaps using human cognition as an inspiration, but then again perhaps not), then none of what I said is relevant to what you’re doing.
I was misled by your original phrasing, which I now realize you meant more as an illustrative analogy. I apologize for the confusion.
Huh. I’m not sure I understand that.
If you say to me “It’s going to rain,” that evokes all kinds of symbols in my head, not just <probability P that it’s going to rain>.
Admittedly most of that isn’t input, strictly speaking… the fact that those symbols are activated is a fact about the receiver, not about the signal. (Though in many cases, an expectation of those facts about the receiver was instrumental in choosing the form of the signal in the first place, and in some cases the probability of that intention is one of the activated symbols, and in some cases an expectation of that was causal to the choice of symbol, and so forth. There are potentially many levels of communication. The receiver isn’t acting in isolation.)
In practice I don’t see how you can separate thinking about what the receiver does from thinking about what the input is. When I talk about “It’s going to rain” as a meaningful signal, I’m implicitly assuming loads of things about the receiver.
I’m asserting that if you actually want to build a system that understands utterances the way humans do (which, as I say, I now realize wasn’t your goal to begin with, which is fine), there are parts of human cognition that are non-optional, and some notion of pragmatics rooted in a model of the world is one of those.
In other words: yes, of course, the steering wheel is different from the engine, and if you don't understand that you don't really have any idea what's going on. Agreed 100%. We want debugging tools that let us trace the stack through the various subsystems and confirm what they're doing, otherwise (as you say) we don't know what's going on.
On the other hand, if all you have is the steering wheel, you may understand it perfectly, but it won’t actually go anywhere. Which is fine, if what you’re working on is a better-designed steering wheel.
What about this:
The predicate “is true” usually gets applied to a sentence with a subject and predicate. The classic example is “Snow is white”. As Tarski says, “‘Snow is white’ is true if and only if snow is white”.
English allows us to pretend we’re applying the words “is true” to a noun, for example “Islam is true”. But this confuses Tarski: “Islam is true if and only if Islam” is nonsense. So we should properly understand “Islam” in this sentence as a stand-in for various sentences lumped under the name Islam, for example “Allah is God”, and “Mohammed is His prophet.” When we do this, the statement “Islam is true” unpacks to “‘Allah is God’ is true, and ‘Mohammed is His prophet’ is true.” This fits nicely in Tarski form: “Islam is true if and only if Allah is God and Mohammed is His prophet.”
So the general idea is that you can’t use a truth-function to evaluate the truth of a noun until you unpack the noun into a sentence.
Now consider the sentence “This sentence is true”. It Tarski-izes to “This sentence is true if and only if this sentence”, which doesn’t work. To make it work, we have to unpack the noun “this sentence” into a sentence. “This sentence” unpacks to the sentence to which it refers: “This sentence is true”. So the unpacking ends with:
“‘This sentence is true’ is true.”
The second round of unpacking ends with:
“‘‘This sentence is true’ is true’ is true.”
And so on, with each unpacking just adding one more “is true” after it without making it any less packed. Trying to unpack the noun fully will lead to infinite regress; stopping at any point will mean you’re trying to run a truth predicate on a noun.
What can be said about a truth predicate can also be said about a falsehood predicate, so the Liar Sentence just returns “invalid argument for function”, the same as if you pointed to a dog and said “That dog is false!”
The other sentences mentioned as contrasts don’t have this problem. “This sentence is in English” also requires a sentence as an argument. It gets one: “The sentence ‘This sentence is in English’ is in English” is a perfectly valid sentence. It’s not necessary to evaluate the truth of the sentence in the middle (its English-ness isn’t related to whether it’s true or false), so we can leave that one unevaluated and just evaluate the frame sentence, which evaluates the inner sentence’s Englishness, which comes out as true.
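A toy Python sketch of this unpacking step (the helper name and the quoting convention are just illustrative):

def unpack_once(sentence, referent):
    # replace the noun "This sentence" with a quotation of the sentence it refers to
    return sentence.replace("This sentence", "'" + referent + "'", 1)

s = "This sentence is true"
for _ in range(3):
    s = unpack_once(s, "This sentence is true")
    print(s)
# 'This sentence is true' is true
# ''This sentence is true' is true' is true
# '''This sentence is true' is true' is true' is true
# Each round just adds another "is true"; the noun never unpacks into a plain sentence.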
What about
‘all sentences are either true or false’.
This sounds like the sort of sentence we'd want to assign a truth value to. Yet we can instantiate it into
‘this sentence is either true or false’
Which is problematic—and yet it seems that it must have a truth value if the first sentence did.
I’m comfortable (mostly, it’s a bit of a bullet bite) saying ‘all sentences are either true or false’ doesn’t have a truth value, since to determine one you have to reference the sentence itself and that function doesn’t terminate. You can say in English or a Meta-language that all well-formed formulas in some system are either true or false. But you can’t say this in the object language.
Did you intend to note that “this sentence is either true or false” is a true sentence (for most methods of evaluation) that can’t be evaluated by Yvain’s fairly straightforward approach? Because that’s definitely interesting (thanks Jack).
Just not messing with recursion, in general, is a fairly old solution and not very satisfying. I blame Yvain’s writing ability for leading 9 people astray :D
Why is that a problem? It is a true sentence.
I take it the problem is that it doesn’t unpack even though it does have a truth value. Or at least it isn’t obvious how to unpack it. It’s a false negative candidate.
So the point is that it is a sentence that demonstrates a problem with using unpackability as a requirement for qualifying as meaningful English? That seems reasonable.
That’s what I got from it.
Note that the universal "All sentences are either true or false" also doesn't appear to meet the unpackability requirement, though I'm not confident I know how to make a Tarski sentence out of that.
Well the dominant strategy in most of these attempts is to deny that all sentences are true or false; some sentences fail to return a truth value because they are meaningless/non-terminating/take invalid arguments etc.
I really like this. It’s an intuitive model of reference in the language, and most importantly it rules out self-reference for an actual reason (never unpacks).
EDIT:
I wonder if you couldn’t do something with that infinite regress. Maybe that’s something interesting in a formal language—doing calculus on infinite recursion? If that’s even possible.
My knowledge of the Arabic language is only good enough to recognize that this is a tautology.
...and now that I think about it, it doesn’t appear that the first part of
is actually an existence claim!
~exist(God)
Correct!
By the way, what do you think about this (last sentence)?
That's not the kind of paperclip maximizing that I like. That instrument should be melted down to make numerous smaller paperclips. It is not maximising usefulness by being an instrument.
Now that would make headlines.
On the opening night of Paperclip Maximiser, have a metalsmith set up a furnace and proceed to fashion the orchestra members’ instruments, one by one, into paperclips.
Oh, that goes without saying.
I was more interested in whether you would appreciate a musical composition drawing attention to your existence and values.
If it makes clear that that’s what the reference is to, that would be great! A lot of people trivialise the paperclip maximiser as a mere “thought experiment”, when actually there are real, conscious, sentient beings out there that are really like that and whose interests need to be accounted for.
Importantly, paperclip maximiser analogues exist in the form of corporations, even if they aren’t sentient.
hee hee hee hee hee
(I’d say something about how I had a mental image of a dog being untrue in the sense of being somehow unfaithful, but I was laughing at this sentence before that occurred to me.)
The Liar's Paradox appears to be a special case of infinite recursion.
def liar():
    return not liar()  # recurses forever; never returns a truth value
Straightforward. A debugging tool would detect an infinite recursion. An English-speaking logician could call it 'meaningless'. Now consider the 'strengthened paradox':
“Everything written on the board in Room 33 is either false or meaningless.”
This isn’t translatable as a function. ‘Meaningful’ and ‘meaningless’ aren’t values bivalent functions return so they shouldn’t be values in our logic. Instead they should be thought of as flags for errors detected by our brain’s debugging tool. But our debugging tool is embedded into the semantics of our language. We talk about sentences having the property of ‘meaninglessness’ instead of our brains not knowing what to do with the string of letters shown to it. You could probably build a language that returned a pseudo-value of “Meaningless” for infinitely recursive functions. It wouldn’t “really” be a value, the program would just output a line that read “x = Meaningless” (not, and this is crucial, assign the variable the value of the string ‘Meaningless’) when asked to find Liar(x). That is basically what the human brain does.
When we get confused by the strengthened liar paradox we’re committing a category error, thinking “meaningless” is a value when it is really an error message. In fact both versions of the paradox are meaningless (assuming ‘meaningless’ can be taken to mean something like ‘Can’t compute’).
Of course it gets hairy since error messages have truth values too. But
Meaningless(Sliar()) = 1
Is not the same as
Sliar() = 1
Thus there is no contradiction. Same will go for Meaningless(Meaningless(Sliar())).
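A toy Python sketch of the distinction (the function names are made up): the checker reports "Meaningless" for a non-terminating evaluation, and that report is an ordinary true claim about the evaluation, not a truth value assigned to the liar itself.

def sliar():
    # strengthened liar, "this statement is false or meaningless"; naive evaluation just recurses
    return not sliar()

def meaningless(thunk):
    # stand-in for the debugging tool: flag evaluations that never terminate
    try:
        thunk()
        return False
    except RecursionError:
        return True

print(meaningless(sliar))  # True, a claim about the evaluation of sliar
# sliar() itself never returns anything, so there is no value there for True or False to contradict.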
The most straightforward way is to interpret “meaningless” as “doesn’t terminate”, or the value Bottom in Haskell.
Interesting. And it works in recursive functions the way it should?
In this context that is fine, but we tend to use ‘meaningless’ to describe a more general class of errors, including syntax.
So the sentence “The sentence ‘Everything written on the board in Room 33 is either false or meaningless.’ is meaningless” is not true?
Sure it's true. That's just Meaningless(Sliar())… I guess I don't see why the selected portion would imply otherwise.
Oh, right, now I get it.
Any discussion of the Liar ought to mention the books of the late Jon Barwise, The Liar and Vicious Circles. Also worth mentioning are Raymond Smullyan's lighter puzzle books based on this paradox.
I like the approach of ‘paraconsistency’ discussed here. But there are some prominent logicians (Girard, for example) who absolutely hate it.
As for “dissolving” the Liar, I would say that it has been dissolved many times, in multiple contradictory ways. Which only goes to show that everything, even logic, can profitably be looked at from divergent viewpoints.
Well, we here agree that beliefs should pay rent. So you see the sentence “this sentence is false” or similar. What new things do you expect now?
Me, I expect to see a philosopher trying to keep his sanity. But the only thing I learned about the sentence’s subject is that it is that same sentence, and that it says it’s false.
So in that sense it’s meaningless.
EDIT: I thought about what I said a bit, and concluded that I’m probably not entirely correct. What the above sentence predicts is that I can read this sentence a second time and see a falsehood.
I realized this after considering that “this sentence is true” is obviously true, yet very similar. And it can also be represented as an infinite recursive function.
EDIT2: Actually, no. “This sentence is true” is NOT obviously true. Infinite recursion is what should happen for it, too.
(I R confuzzled)
What the paradox tells me is that our understanding of the nature of language, logic, and mathematics is seriously incomplete, which might lead to disaster if we do anything whose success depends on such understanding.
The paradox is related to the fact that we don't have a formal language that can talk about all of the content of math/logic, for example, the truth value (or meaningfulness, if some sentences are allowed to be meaningless) of sentences in the language itself, which is obviously part of math or logic.
Since our current best ideas about how to let an AI do math is through formal languages, this implies that we are still far from having an AI achieve the same kind of understanding of math as us. We humans use natural language which does have these paradoxes which we don’t know how to resolve, but at least we are not (or at least not obviously) constrained in which parts of math we can even talk, or think about.
I deem “this sentence is false” as meaningless and unworthy of further scrutiny from me.
Challenge: On the basis of the above, paperclip-pump me. (Or assume I’m a human and money-pump me.)
What is your algorithm for determining which sentences are meaningless? Since we don’t have such an algorithm (without serious flaws), I’m guessing your algorithm is probably flawed also, and I can perhaps exploit such flaws if I knew what your algorithm is. See also this quote from the IEP:
The “beliefs should pay rent” heuristic mentioned by User:Tiiba already answers this. My method (not strictly an algorithm[1], but sufficient to avoid paperclip-pumps) is to identify what constraint such an expression places on my expectations. This method [2] has been thoroughly discussed on this internet and is already invoked here as the de facto standard for what is and is not “meaningless”, though such a characterisation might go by different names (“fake explanation”, “maximum entropy probability distribution”, “not a belief”, “just belief as attire”, “empty symbol”, etc.).
Is your claim, then, that the “beliefs should pay rent” heuristic has serious enough flaws that it leaves an agent such as a human vulnerable to money-pumping? Typically, beliefs with such a failure mode immediately suggest an exploitable outcome, even in the absence of detailed knowledge of the belief holder’s epistemology and decision theory, yet that is not the case here.
With that in mind, the excerpt you posted does not pose significant challenges. Observe:
This was not the justification that I or User:Tiiba gave.
The claim that a symbol string “is in English” suggests observable expectations of that symbol string—for example, whether native speakers can read it, if most of its words are found in an English dictionary, etc. This is a crucial difference from the Liar Sentence.
Again, lack of a mapping to a probability distribution that diverges from maximum entropy.
The non-Liar Sentence part of them is not.
The requirement that beliefs imply anticipations is systematic, and prevents such a continuation.
[1] and your insistence on an algorithm rather than mere heuristic is too strict here
[2] which is also an integral part of the Clippy Language Interface Protocol (CLIP)
I can’t argue with that!
This feels like the wrong step in the dance to me. Haven’t you just thrown away all of mathematics? What new things do you expect after solving a quadratic equation?
User:Tiiba is correct. Math (after mapping to real-world predicates) allows me to more quickly form reliable expectations about the world. The claim that “this sentence is false” does not. Therefore, I can leave beliefs about the latter unassigned without epistemic or instrumental penalty.
Omega places a series of 50 boxes in front of you, labelled numerically. Two of them contain explosive boobytraps while each of the remaining 48 contain $200,000. The sum of the labels on the trapped boxes is 55 while the product is 714. Which boxes would you not choose to open?
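For the record, the arithmetic here pays rent straightforwardly (assuming the labels run 1 through 50 and exactly one pair fits): the trapped labels x and y satisfy x + y = 55 and x*y = 714, so they are the roots of t^2 - 55t + 714 = 0. The discriminant is 55^2 - 4*714 = 3025 - 2856 = 169 = 13^2, so t = (55 ± 13)/2, giving 21 and 34. Those are the two boxes to leave shut.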
Nice example. To follow up:
Next, Omega places two boxes in front of you. One carries the label “The label on the other box contains a true sentence”. The label on the other box reads “The label on the other box contains a false sentence”. You are told that the box(es) without false labels contain $1,000,000, whereas the box(es) with false labels are boobytrapped. It is conceivable that the labels are meaningless—therefore not false. It is also conceivable that the labels are both true and false—contradictory, but paraconsistent.
Do you open the boxes?
Quadratic equations are relatively clear-cut.
The issue here is with language and the meaning of the word “false”, not with the concept of truth independent of language.
Consider: Omega places a number of colored boxes in front of you. Omega tells you that boxes with an even number of boxes in the same basic color contain $200,000 while those with an odd number contain bombs. The colors are such that speakers of languages with different divisions of the spectrum in basic color words group the boxes differently (Omega repeats the explanation in several different languages in a random order). Does this thought experiment reveal anything interesting about the concept of color?
Omega can only do this if by some coincidence the even/oddness of each box is the same in every language.
Yes, even so.
The issue where? Are you saying that this thread is about language, rather than truth? Or that my example, as written, is about language rather than (as intended) about truth?
I’m a bit surprised that anyone could conceive of a concept of truth independent of language. I’ve always considered truth as an attribute of sentences—linguistic objects. Perhaps I am missing your point.
As for your example, yes, I would say that it does point out something interesting, though already known, about the concept of color. That valid classification systems based on this criterion may disagree. This is also true of truth and logic. Some people say that the Liar statement is neither true nor false. Some people say that it is both true and false. Both can be correct, depending on what else they claim.
“Snow is white” is true if, and only if, snow is white—Tarski’s material adequacy condition—only makes sense if there is a fact of the matter about whether or not snow is white independent of language.
Tarski left out some of the fine print. That “if and only if” works only under the prior assumption that “snow” designates snow, “white” designates white, and “is” designates the appropriate infix binary relation.
In other words, “Snow is white” is true only if we know that “Snow is white” is a sentence in the English language.
Not really. If “snow” designates grass, and “white” designates green, then “‘snow is white’ is true if and only if snow is white” is still correct. Same if “snow” designates the sky and “white” designates green.
I’m afraid I don’t understand your point.
It should have read: “Same if “snow” designates the sky and “white” designates blue.”
It was apparently a nitpick of your first paragraph, ignoring your second paragraph.
That can’t be right. If he both misinterpreted ‘prior assumption’ and made a serious typo, his comment would not have been twice upvoted, would it?
If you can't take as a given that statements actually are in the language they appear to be in, no statement can have any knowable truth value. If "snow" in the utterance is a word in the same language as the identically spelled word in the statement, and the same for "white" (and "is"), and the rest of the statement means exactly the same as it does in English, then the statement is still correct. But if "white" might designate orange, "true" might just as well designate bubblegum or "only" designate "to treat like a second cousin".
And further the grammar of English is being assumed… as well as the very concept of languages.
Let me see if I understand you. “Snow is white” is true if and only if “snow” means snow, “is” means is, “white” means white, and snow is white? Because that still only makes sense if there’s a fact of the matter about whether or not snow is white. And as ata pointed out, it’s also false.
Edit: Maybe Tarski’s undefinability theorem applies here. It says that no powerful formal language can define truth in that language. So if, as you say, truth is an attribute of linguistic objects, you have to invoke a metalanguage in which truth is defined. Then you need a meta-meta-language, etc. Of course English is not a formal language, and there is no formal meta-language for English—we talk about the truth of English sentences in English—but that is my point, that it relies for certain things on non-linguistic definitions. When we start discussing sentences like “This sentence is false”, there’s a tendency to forget that English does not and cannot define the truth of all English sentences.
No, that is not what I said. I said that IF “snow” means snow, “is” means is, and “white” means white, THEN “Snow is white” is true iff snow is white.
I never denied that. But the fact has nothing to do with truth unless you bring language into the discussion. Only linguistic objects (such as sentences) can be true.
Somehow, I feel that we are talking past each other.
ETA:
And now I know we are talking past each other.
That makes a lot more sense, thanks.
I think we’re getting somewhere. I thought that you were saying that whether or not a statement is true is a property of language. Tarski’s saying that whether or not a sentence is true is determined by whether it corresponds to reality. You’re saying that whether or not it corresponds to reality is determined by the meaning the language assigns to it.
I’m still not convinced that truth is to do with language, though. Consider a squirrel trying to get nuts out of a bird-feeder, say. The squirrel believes that the feeder contains nuts, that there’s a small hole in the feeder, and that it can eat the nuts by suspending itself upside down from a branch to access the hole. The squirrel does actually possess those beliefs, in the sense that it has a state of mind which enables it to anticipate the given outcome from the given conditions. The beliefs are true, but I’m certain that the squirrel is not using a language to formulate those beliefs in.
That sounds right. I think if we describe a sentence as being “true” then we’re really saying that it induces a possibly-nonverbal mental model of reality that is true (or very accurate), but we can say the same about mental models that were nonverbal to begin with.
Can you clarify more exactly what you mean by “valid?” Because my initial reaction is that of course, you can come up with many classification systems for any set of things. It’s not yet clear to me what interesting thing we can take away about how people are using the classifications of “true” and “false”, other than the fact that they don’t work very well for classifying certain unusual statements.
It seems to me that we have seen people in this thread advocate two value logics, three value logics, and four value logics. You can have workable systems of logic with and without the law of the excluded middle, and with and without a law of contradiction. There are intuitionistic logics, relevance logics, classically consistent and paraconsistent logics. To say nothing of linear logic, modal logics, and ludics.
Follow the links to the SEP articles on dialetheism and paraconsistency. And then follow the citations from there to learn that logic is a pretty big and flexible field.
The latter, though the former might also be the case.
“Independent of language” as in independent of the conventions of English, or Chinese, or Python, or street signs, or dolphin calls or whatever, not removed everything that could bear it.
Well, arguing about words is not very interesting to me, nor is the insight that words are just conventions and to a large degree arbitrary.
A good follow-up. My response is: no, Omega didn't. The very nature of Omega prohibits writing such things. If someone gave you that problem, it was someone other than Omega.
OH NO HE DI’INT
Yes. Quantum immortality.
I will expect new things about where the zeros are. That means I can expect new things about my graphing calculator.
What about the sentence “This English sentence has six words?” It’s self-referential, but it’s certainly not meaningless, is it? And yet if you believe it, it doesn’t tell you anything except something about itself.
(EDIT: Maybe my claim is conflating the proposition TESHSW with the written representation of the proposition, which it describes and does tell you something about. Perhaps simply “This sentence is either true or false” is a better example—I don’t think that is meaningless at all, either, just trivial.)
Consider the statement “BLGRGHLKH is either true or false”, where BLGRGHLKH is a meaningless combination of letters I just made up.
I interpret the statement “BLGRGHLKH is true” as meaningless (in fact, by Tarski, this statement correlates with BLGRGHLKH, which we know is meaningless), but I am tempted to say the statement “BLGRGHLKH is either true or false” is true, maybe just as a reflex of declaring “X is either true or false” true for all values of X.
That calls into question the ability to move from “This sentence is either true or false” sounding meaningful to “This sentence is false” sounding meaningful.
I think the quotation-referent distinction makes this sufficiently different from the Liar. The referent of this sentence is the quotation "This English sentence has six words", which is not quite the same as the referent being the meaning of the sentence. It's no more self-referential than "This sentence is written in black ink".
I agree with you. That’s basically what I was getting at afterward in my edit. I’m just trying to dig up a statement which is unambiguously true, but yet isn’t at all useful. I think that “This sentence is either true or false” fits the bill.
Hmm. If you visualize meaning as a mapping between representation space and some subset of expectation space, “this English sentence has six words” forms a tight little loop disconnected from the rest of the universe. That seems to me like as good an indication as any that the statement has no useful consequences.
The distinction between “meaningless” and “trivial” seems pretty semantic to me.
In my mind, I have the category “meaningless” as statements which can’t be assigned a truth value without breaking the consistency of our system, and “trivial” as statements which can be assigned a truth value, but don’t pay any rent at all.
Try this way: Working in boolean logic, “This sentence is either true or false” can be true, and it can’t be false, right? If we can make these definite remarks about its properties within our system, can we still call it meaningless? Even though it doesn’t have useful consequences. (A formal way of saying it doesn’t have useful consequences, I guess, is to say that for our useless statement B and for all A, P(A) = P(A|B) -- it isn’t any evidence for anything at all.)
Given your definitions, that makes sense. One of the points I was trying to make, though, is that “meaningless” is one of those words with several related but slightly different interpretations, and that a lot of the trouble in this thread seems to have come from conflicts between those interpretations. In particular, a lot of the people here seem to be using it to mean “lacks evidential value” without making a distinction between the cases you do.
As to which definition to use: I'd say it depends on what we're looking at. If we're trying to figure out the internal properties of the logical system we're working with, it's quite important to make a distinction between cata!trivial and cata!meaningless statements; the latter give us information about the system that the former don't. If we're looking at the external consequences of the system, though, the two seem pretty much equivalent to me—in both cases we can't productively take truth or falsity into account.
To echo Tiiba but more formally: given a specific physical circumstance (transistors designed to do a computation) you can predict the result of the computation exactly and arbitrarily quickly (or as fast as you can look it up), because you have already done that computation.
For more abstract theorems, proving two things are equivalent leads me to expect one in the presence of the other.
For example, before proving Fermat’s Last Theorem, I might expect there to be a right triangle whose three sides were squares of integers. Now I expect not to.
I like the article’s approach, but it’s a bit arbitrary in that “true contradiction” and “false contradiction” are equivalent. But perhaps due to bias towards the positive they get characterized as “true.”
What the Liar’s paradox really demonstrates is that true and false are not general enough to apply to every sentence, and so to deal with such cases satisfactorily we must generalize our logic somehow.
Then the question is—which generalization do we make? Going with the first thing that pops into our heads is probably bad. Well, let’s start with some desiderata:
1) We want it to assign a definite classification to the Liar's sentence. Fairly straightforward—whether it's "option 3" or "1/2" or "0.321374..." we want our system to be able to handle the Liar's sentence without breaking.
2) It should reduce to classical logic in classical cases.
3) It should not be more complicated than necessary.
4) it should not be obviously vulnerable to a strengthened Liar’s paradox.
5, etc.) Help me out here :P
Desideratum (3) suggests something along the lines of this, but that might fall prey to (4). I think it’s possible that we’ll need to allow a continuous truth value. But for now, sleep!
EDIT: After a little experience with this stuff, I don’t like the article’s approach anymore. “This sentence is not true and is not a ‘true paradox.’”
Manfred's log, stardate 11/30
A little sleep, a little progress. The "fuzzy logic" approach that gives each statement a truth value between 0 and 1 can't handle the obvious "this sentence is not true," so it's out. The other one-parameter approach I can think of is more clever. The thought was that each self-referential statement defines a transformation of its own "truth vector" (T, F), so consistency means that the statement should evaluate to eigenvectors of the transformation. Unfortunately, these transformations don't always commute, so you can get inconsistent answers to "this sentence is not true and is not (1/sqrt(2),1/sqrt(2))." Still working on that one.
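A quick numerical sketch of the truth-vector idea (my own rendering, using numpy): "this sentence is false" swaps the components of (T, F), and the only direction it leaves fixed is the even mixture.

import numpy as np

# "This sentence is false" sends the truth vector (T, F) to (F, T): the swap matrix.
liar = np.array([[0.0, 1.0],
                 [1.0, 0.0]])

vals, vecs = np.linalg.eig(liar)
fixed = vecs[:, np.isclose(vals, 1.0)].ravel()  # the eigenvector with eigenvalue 1
print(fixed)  # roughly (0.707, 0.707), i.e. (1/sqrt(2), 1/sqrt(2)) up to sign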
Tordmor’s first sentence below is correct, the system should be boolean arithmetic. (that’s all that’s correct in his post...)
Turing proved that any computational process (if we're being formalists and saying that our philosophical problems are computations) can be simulated in a universal Turing machine, and you can write those in binary; so in some sense you really only have two values to deal with. Given a trinary table of truth values, you can run the same computation in a binary system, and then in that binary system write a liar's paradox and translate it.
I don’t know what you’d get but it might be something along the lines of “this proposition is (true and false) xor (both)” as a wild guess.
The Liar's sentence is already uncomputable, so I've already abandoned Turing machines by attempting to give it a consistent classification. So his proposed desideratum 5 conflicts with what I consider to be the more important desideratum 1.
The sentence “assign a consistent classification” sounds an awful lot like computing something to me. If you have a different meaning in mind then please elaborate. “Caught by the bug-checker” seems to be what people have settled on elsewhere.
The liar’s sentence isn’t incomputable, it just never returns a value. My point is that you can’t use a third variable to fix everything.
Something does get computed, but not the usual thing. It is possible to write a computer program that can use the symbol “pi.” It is not possible to write computer program to tell you every digit of pi. But on the other hand, if it’s as easy as writing “pi,” there’s not much point to thinking of it as a computer program.
If it was computable, it would return a value. If P->Q, then not Q->not P.
We agree: in fact, that was a central point—adding more states is still trying to compute the same thing, and so it won’t fix everything for the same reason using boolean arithmetic won’t fix everything. In order to handle the liar’s paradox we need to change the comparison operation (pretty sure, unless we avoid the problem), thus doing away with boolean arithmetic.
When I think “not computable” I think of things which aren’t implementable as computations. For the definition “implementable as a computation of finite length” versus as a program of finite length, pi seems to become incomputable… so that use of incomputable is weird to me.
I do believe that we agree. Creating a different solution to the liar paradox requires us to abandon formalism… but as far as I am aware the whole point of formalism is to give us good criteria for when our answers are satisfying, so I don’t really see how abandoning it helps.
5) It should be a boolean arithmetic
The linked three-valued logic fails because it is not a boolean arithmetic, which is impossible with only three states. You need at least four: true, false, contradictory and ambiguous. With these you can not only solve the liar paradox but also the proposition "This proposition is true", which is ambiguous. And no, that does not mean it would be false because it states it were true while it actually is ambiguous. It is simply ambiguous.
As a funny side note, I think that is where Gödel erred. His incompleteness theorem probably rests on a two valued logic. But I’m not a mathematician and can’t proof that.
You won’t create anything worthwhile in math if you don’t study it. To break your current system, consider the proposition “This proposition is either false, contradictory, or ambiguous”.
You are absolutely correct. I haven’t thought this through. Thank you for the lesson.
Edit: I did take the lesson that I should think more before making such a claim, however, I wanted to point out that your sentence poses no problem and was not the point.
"this p. is false" is contradictory.
"this p. is contradictory/ambiguous" is false.
The conjunction of contradictory and false is contradictory, so you have a unique solution. This is also what intuition tells us, since the proposition cannot be true and cannot be false, and that would be contradictory.
I don’t understand your solution. If the proposition is contradictory, then it’s true—just look at what it says.
Or maybe I don’t understand how we are supposed to assign truth values to disjunctions (“either/or”) in your system: can a disjunction still be contradictory if one of its clauses is true? And surely if X is contradictory, then the clause “X is contradictory” must be true… or is it?
Ok, I get it now. So, I was wrong on that too. Thank you.
What do you do with “This sentence is contradictory”?
false.
The method would be to ask: Can it be true? Can it be false?
If yes to both, it is ambiguous; if no to both, it is contradictory.
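A minimal sketch of that test in Python, for self-referential sentences simple enough to encode as "the truth value the sentence asserts, given an assumed value" (a toy encoding, not part of the original proposal):

def classify(asserts):
    # asserts(v): the truth value the sentence claims for itself if we assume its value is v
    consistent = {v for v in (True, False) if asserts(v) == v}
    if consistent == {True, False}:
        return "ambiguous"       # both assignments are self-consistent
    if not consistent:
        return "contradictory"   # neither assignment is self-consistent
    return "true" if True in consistent else "false"

print(classify(lambda v: not v))  # "this proposition is false" -> contradictory
print(classify(lambda v: v))      # "this proposition is true"  -> ambiguous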
This makes no sense.
I’m neither a mathematician nor a linguist but I think you mean ‘prove’.
Any system that does not give this proposition the value of ‘true’ is wrong, for all definitions of true and wrong that are useful, coherent, or reasonable.
Mind explaining why? I don’t see any reason it’s any more true than it is false.
Hmm. I was going to say “assign it the value of true, and it returns true. Assign it the value of false, and it returns a contradiction”, but on reflection that’s not the case. If you assign it the value of false, then the claim becomes ¬(A is true), so it returns false.
So I was wrong—the proposition is a null proposition, it simply returns the truth value you assign to it. I don’t know if ambiguous is the best way to describe it, but ‘true’ certainly isn’t.
edit: perhaps cata’s ‘trivial’ is a good word for it.
Interesting. If I infer correctly...
Tordmor messed up and wrote “This proposition is true” when he probably would have wanted to have referred to “This proposition is false”.
Shokwave correctly notes that “This proposition is true” isn’t ambiguous at all, it essentially returns the value True.
Jonii also correctly observes that the person speaking the claim “This proposition is true” could be lying or mistaken (to the extent that the statement has bearing on facts external to the phrase). Apparent disagreement with Shokwave is likely to be due to ambiguity in the casual English representations of logical dereferencing.
How did you determine that the sentence “This proposition is true” returns the value True?
To me it doesn’t seem to return any value. Tordmor correctly notes its truth-state is uncertain.
Again, English is messy. Shokwave was noting (and I was acknowledging) that there is the claim of truth.
No he doesn’t. He claims it is ambiguous—an entirely different thing. It is an unambiguous claim to be true. Such a claim can itself be false but the meaning is entirely clear. It says it’s true!
Contrast with “This statement is false”.
These distinctions become relevant when Omega throws you box puzzles like this.
This is a tangent to dialetheia, but something I have wanted to bring up for a while:
http://video.ias.edu/voevodsky-80th
Voevodsky asks: what if the current foundations of math are inconsistent? He answers: probably nothing too bad.
My first thought was to look for the technical version of Priest’s article, which turns out to be his book “In Contradiction”, which turns out not to be in my university library. The Amazon preview tells me that he discusses Gödel’s theorems but not the computational models that so many comments here talk about, and he gives a formalisation of some form of paraconsistent logic. However, the preview isn’t enough to answer the basic question to ask about any non-standard logic: is it intertranslatable with classical logic, such that every truth of either is mapped to a truth of the other? If it is, then there is no philosophically interesting distinction between them, any more than there is between English and French, or C++ and Perl, or standard analysis and non-standard analysis.
So summarizing the thread and the links I’ve read it looks like there are two basic strategies to solving this problem. One is the Dialetheist strategy used in para-consistent logics. This strategy rejects the principle of non-contradiction and one of the rules that leads to the principle of explosion (any part of the disjunction syllogism used in explosion). The other is the strategy characterized by the formalist school’s approach and cousin_it’s comment variations of which were given throughout the thread (I consider Yvain’s Tarski sentence approach to be an instance of this strategy). The idea here is that by treating the paradox as a bivalent function we notice that it is a function which recurses infinitely. We then add the perhaps not obvious but certainly intuitive premise that sentences which suggest non-terminating functions are part of the subset of ‘meaningless’ sentences.
From the IEP
Is what has been proposed here different from the approaches of Quine and Russell? If not, which of the two is right? It isn't obviously a difference that makes no difference; Russell's approach rules out all self-reference while Quine's does not.
Now, this method (call it the non-termination approach) certainly seems to dissolve the confusion. And compared to the Dialetheist approach the non-termination approach appears much superior. The former gives up the principle of non-contradiction and a useful rule like disjunction introduction (or some other alteration to deductive logic, there appear to be a lot of alternatives and I haven’t gone through them all).
So the question becomes: what advantages does the paraconsistent logic approach have? Does anyone know of examples of logic that don’t have to sacrifice significant power in order to accommodate dialetheias? It doesn’t seem like it would be worth it.
I remember being bothered by this problem, and feeling like I had resolved it as an undergrad. Calling it a “true contradiction” seems absurd; you’ve just drawn a circle around it and said, “Nothing to see here! Move along!”
I think the solution is related to modal logic. “This sentence is false” creates a self-referential universe devoid of meaning, and thus has no truth value. It refers only to the world of itself, and there are no rules that it can be evaluated against, nor are there any observations that can confirm or disconfirm it. It is, in a sense, epiphenomenal, as there is no actual thing which it corresponds, predicts, or relates to. It is, in a sense, a one-sentence universe that cannot be tied to anything in any other universe.
This concept seems more robust in my mind; I suspect I am either making a mistake or failing to explain myself. Criticism or questions would be appreciated.
I’m highly sympathetic to the intuition that the liar sentence is devoid of meaning in some important respect, but I don’t think we can just declare the liar sentence meaningless and then call it a day. Because in another respect, it definitely seems meaningful. I understand what a sentence is, and I feel like I understand what it is for a sentence to be true or false. If someone wrote on a blackboard “The thing written on the blackboard of room 428 is false,” I feel like I would understand what this is saying before I went to check out room 428. Hence I must understand the sentence if it turns out that we’re in room 428 already.
Also consider the Strengthened Liar: “This sentence is not true.” According to your solution, that sentence should also be dismissed as meaningless, right? But surely meaningless sentences a fortiori aren’t true. But that’s precisely what the sentence asserts, hence it is true.
If it’s meaningless, it doesn’t assert anything.
A sharper formulation of the paradox just came to my mind. Consider the statements X = “X is not true” and Y = “X isn’t true”. (The difference in spelling is intentional.) If X is meaningless, then X isn’t true, therefore Y is true. But it’s a very weird state of affairs if replacing “isn’t” by “is not” can make a true sentence meaningless!
The apostrophe in this sentence isn’t needed for comprehension.
Good point. I take the claim that a sentence S is meaningless as equivalent to the claim that S has no truth-conditions. Let A be any schema for the conditions on which a sentence has truth-conditions, so that for each English sentence S, A(S) is true iff S is meaningful/has truth-conditions. Let S be the sentence ~A(S). Then S has truth-conditions iff A(S) iff ~~A(S) iff ~S. Contradiction. Nowhere was it assumed that the contradictory sentence was meaningful.
When you state A(S) iff ~S, you are formally substituting S for ~A(S), but the meaning of “A(S) iff ~S” is “the set of truth-conditions for ~~A(S) is the same as the set of truth-conditions for ~S”. But this assumes that there exists a set of truth-conditions for ~S, which assumes that there exists a set of truth-conditions for S, i.e. that S is meaningful, by your definition.
O.K., I don’t know how to italicize here.
Next time you comment, try the Help link (lower right).
Ah, thanks.
Interesting idea. But what is it that shifts us into a new universe? A clause of the form “___ is true”? The use of an indicative “this”? I like the idea of a universe disconnected from the rest of reality. But what puts us there, and what can we talk about while in residence?
You might enjoy Vicious Circles which sketches a resolution of the Liar which seems similar to what you are suggesting. Your idea may also be very similar to the “relevance logic” and “paraconsistency” approach sketched in the article linked by the OP.
There’s nothing inherent in a statement that makes it true or false. It’s just useful to think that way.
I’d say that it’s really just somebody vibrating the air, but even that is an abstraction, and has no more real truth than anything else.
http://yudkowsky.net/rational/the-simple-truth
Let me rephrase this. There’s a reality. The universe is what it is. There are no logical truths. There is not a “1+1=2”. There is not even a “there is a reality” or a “there are no logical truths”.
Logical truths exist within a system, and while in one sense that system does not exist in the universe, we can still note that our induction tells us the concept ‘logical system’ and its subconcepts ‘logical truth’ and ‘logical falsity’ apply with a very high probability to the universe.
There are a series of observations where one unit and another unit are combined and the result is two units. There are no observations where one unit and another unit are combined and the result is one unit, or three units, or any number of units other than two. There isn’t a general law that says 1+1=2. These assertions seem disingenuous when considered together. It appears you want more out of ‘general law’ than ‘applies in every case’.
What, in your mind, distinguishes ‘There is a reality’ from ’The sentence “there is a reality” is true”?
‘There is a reality’ is the closest I can get to expressing the physical truth. It’s still a failure. It’s a map, not the territory, but it’s the closest I can get to the territory. ‘The sentence “there is a reality” is true’ is more like a map of a map. It’s clearly supposed to be a map. It’s an obvious attempt at a logical truth.
Put another way, if I draw a picture of a pipe, and it’s not convenient to actually give you a pipe, I probably mean a pipe. If I draw a picture of a picture of a pipe, I couldn’t have been referring to anything but a picture.
When I say reality is true, I mean it’s there. “There is reality” is only there in the sense that you wrote it. If I wrote “Colorless green ideas sleep furiously”, (which I did), it would be there.
http://en.wikipedia.org/wiki/Liar%27s_paradox#Possible_resolutions
Tarski and Prior both have good approaches. I wouldn’t call this problem unsolved.
I don’t mean to throw away all the wonderful complexity and intricacy of the argument, but it seems like they had it just about right when they added “is meaningless”.
The trick is just not to write “Everything on Board 33 is meaningless” on that same board. Honestly, that board is just a bad choice for this task.
Which is to say, we can see, from our vantage point outside the sentence, that the sentence is meaningless (in the sense of “can’t tell us anything”). That seems like it ought to be enough. Why try to inject our vantage point into this whirlpool of contradiction when we can just notice, remaining outside the whirlpool, that the question is meaningless, and move on anyway?
Because I could write “This sentence is not true.” So if it’s meaningless, it’s not true, so it’s true, which would make it false, which would make it true, etc.
Since that doesn’t help at all, you appear to be just advocating giving up.
Well, sort of. You might say I’m advocating acknowledging that it’s a paradox and considering that the end of it. Remember, I’m using the term meaningless (perhaps I should have said “useless”) to mean that it tells us nothing. Not in the sense that it makes no claims, but in the sense that it contains no information. It’s not clear to me that this kick starts the paradoxical loop like you’re implying.
Dissolving the Liar’s Paradox seems to me like trying to falsify a proof by contradiction.