What the paradox tells me is that our understanding of the nature of language, logic, and mathematics is seriously incomplete, which might lead to disaster if we do anything whose success depends on such understanding.
The paradox is related to the fact that we don’t have a formal language that can talk about all of the content of math/logic, for example the truth value (or meaningfulness, if some sentences are allowed to be meaningless) of sentences in the language itself, which is obviously part of math or logic.
Since our current best ideas about how to let an AI do math involve formal languages, this implies that we are still far from having an AI achieve the same kind of understanding of math as we have. We humans use natural language, which does have these paradoxes that we don’t know how to resolve, but at least we are not (or at least not obviously) constrained in which parts of math we can even talk or think about.
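As a toy illustration of why the Liar Sentence resists truth assignment (my own sketch, not part of the discussion above): if we model sentences as procedures that compute their own truth value, a sentence grounded in an observable fact evaluates immediately, while naive evaluation of the Liar Sentence recurses forever, since its value is defined as the negation of itself.

```python
import sys

def liar():
    # "This sentence is false": its truth value is the negation of
    # itself, so naive evaluation recurses without ever producing a value.
    return not liar()

def in_english():
    # "This sentence is in English" grounds out in an observable check,
    # so it evaluates immediately.
    return True

old_limit = sys.getrecursionlimit()
sys.setrecursionlimit(200)  # fail fast instead of recursing for long
try:
    liar_value = liar()
except RecursionError:
    liar_value = None  # no truth value was ever assigned
finally:
    sys.setrecursionlimit(old_limit)

print("Liar Sentence value:", liar_value)      # None
print("'in English' value:", in_english())     # True
```

This is only an analogy for the semantic problem, of course: the formal paradox is about truth predicates, not about Python's stack.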
What is your algorithm for determining which sentences are meaningless? Since we don’t have such an algorithm (without serious flaws), I’m guessing your algorithm is probably flawed also, and I could perhaps exploit those flaws if I knew what your algorithm was. See also this quote from the IEP:
Many people, when first encountering the Liar Paradox, react by saying that the Liar Sentence must be meaningless. This popular solution does stop the argument of the paradox, but it is not an adequate solution if it answers the question, “Why is the Liar Sentence meaningless?” simply with the ad hoc remark, “Otherwise we get a paradox.” An adequate solution should offer a more systematic treatment. For example, the sentence, “This sentence is in English,” is very similar to the Liar Sentence. Is it meaningless, too? What ingredients of the Liar Sentence make it meaningless such that other sentences with those same ingredients will also be meaningless? Are disjunctions with the Liar Sentence meaningless? The questions continue, and an adequate solution should address them systematically.
What is your algorithm for determining which sentences are meaningless? Since we don’t have such an algorithm (without serious flaws), I’m guessing your algorithm is probably flawed also,
The “beliefs should pay rent” heuristic mentioned by User:Tiiba already answers this. My method (not strictly an algorithm[1], but sufficient to avoid paperclip-pumps) is to identify what constraint such an expression places on my expectations. This method [2] has been thoroughly discussed on this internet and is already invoked here as the de facto standard for what is and is not “meaningless”, though such a characterisation might go by different names (“fake explanation”, “maximum entropy probability distribution”, “not a belief”, “just belief as attire”, “empty symbol”, etc.).
Is your claim, then, that the “beliefs should pay rent” heuristic has serious enough flaws that it leaves an agent such as a human vulnerable to money-pumping? Typically, beliefs with such a failure mode immediately suggest an exploitable outcome, even in the absence of detailed knowledge of the belief holder’s epistemology and decision theory, yet that is not the case here.
With that in mind, the excerpt you posted does not pose significant challenges. Observe:
it is not an adequate solution if it answers the question, “Why is the Liar Sentence meaningless?” simply with the ad hoc remark, “Otherwise we get a paradox.”
This was not the justification that I or User:Tiiba gave.
For example, the sentence, “This sentence is in English,” is very similar to the Liar Sentence. Is it meaningless, too?
The claim that a symbol string “is in English” suggests observable expectations of that symbol string—for example, whether native speakers can read it, if most of its words are found in an English dictionary, etc. This is a crucial difference from the Liar Sentence.
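For instance, one crude observable check (my sketch; the tiny word list is a stand-in for a real English dictionary) is whether most of a symbol string’s tokens appear in an English lexicon:

```python
# Crude observable test for "is in English": what fraction of the
# tokens appear in an English word list? (Tiny stand-in lexicon here;
# a real check would use a full dictionary.)
LEXICON = {"this", "sentence", "is", "in", "english", "false", "the", "a"}

def looks_english(text, threshold=0.8):
    words = [w.strip(".,!?").lower() for w in text.split()]
    known = sum(w in LEXICON for w in words)
    return bool(words) and known / len(words) >= threshold

print(looks_english("This sentence is in English."))  # True
print(looks_english("Xyzzy qwfp zzt grlm."))          # False
```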
What ingredients of the Liar Sentence make it meaningless such that other sentences with those same ingredients will also be meaningless?
Again, lack of a mapping to a probability distribution that diverges from maximum entropy.
Are disjunctions with the Liar Sentence meaningless?
The non-Liar Sentence part of them is not.
The questions continue, and an adequate solution should address them systematically.
The requirement that beliefs imply anticipations is systematic, and prevents such a continuation.
[1] and your insistence on an algorithm rather than a mere heuristic is too strict here
[2] which is also an integral part of the Clippy Language Interface Protocol (CLIP)
I deem “this sentence is false” meaningless and unworthy of further scrutiny from me.
Challenge: On the basis of the above, paperclip-pump me. (Or assume I’m a human and money-pump me.)
I can’t argue with that!