Well, not “the way humans do,” specifically—the fact that humans do it is just a way to motivate making logical systems that can do it too. Hopefully we can find how to do it better than humans do it by standards like consistency.
In other words, 0 and 1 are not probabilities.
Well, the problem with probabilities of 0 and 1 is more complicated than “they’re not probabilities.” But I see your point.
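(The point, as I take it, being the usual log-odds framing; just a sketch of the standard argument, nothing new:)

```latex
% Log-odds of a probability p, and Bayes' rule written additively:
\[
  \ell(p) = \log\frac{p}{1-p},
  \qquad
  \ell\bigl(P(H \mid E)\bigr) = \ell\bigl(P(H)\bigr)
    + \log\frac{P(E \mid H)}{P(E \mid \neg H)}.
\]
% As p \to 0 or p \to 1, \ell(p) \to \mp\infty: no finite evidence term ever
% reaches those endpoints, and nothing moves you off them once you're there.
```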
the thing to do is move away from representing categorical propositions at all.
That seems tricky. All the input into our brains seems to be translatable into categorical propositions with one extra parameter of probability. But we want to make our logical system deterministic, so that probability is just an ordinary extra parameter, which we’ve already seen doesn’t help resolve the Liar’s Paradox in simple applications. So are you just proposing making a system that’s like the human brain in that we can’t pick out the influence of individual parts? I think this would be a bad approach, even ignoring the appeal of simplicity, since it likely wouldn’t solve the problem; it would just prevent us from knowing what the problems with the system were.
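To spell out the sort of “simple application” I have in mind, here’s a toy sketch (every name in it is made up): a categorical proposition carrying its one extra probability parameter, pointed at a sentence that asserts its own falsehood.

```python
# Toy sketch, with made-up names, of a categorical proposition plus one
# probability parameter, applied to a sentence asserting its own falsehood.

def liar_constraint(p):
    """If the probability we assign the sentence is supposed to track the
    probability that what it asserts holds, and what it asserts is its own
    falsehood, we get the constraint p = 1 - p."""
    return 1 - p

p = 0.5                        # the only fixed point of p = 1 - p
assert liar_constraint(p) == p

# The number 0.5 is consistent, but it doesn't tell the system how to reason
# with the sentence; the self-reference hasn't gone anywhere, it's just
# wearing a probability.
```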
Well, not “the way humans do,” specifically—the fact that humans do it is just a way to motivate making logical systems that can do it too. Hopefully we can find how to do it better than humans do it by standards like consistency.
Fair enough.
If you’re not actually trying to build a system that interprets the Liar’s Paradox the way humans do, but rather a system that (for example) interprets the Liar’s Paradox as a categorical probabilistic proposition without immediately believing everything (perhaps using human cognition as an inspiration, but then again perhaps not), then none of what I said is relevant to what you’re doing.
I was misled by your original phrasing, which I now realize you meant more as an illustrative analogy. I apologize for the confusion.
All the input into our brains seems to be translatable into categorical propositions with one extra parameter of probability.
Huh. I’m not sure I understand that.
If you say to me “It’s going to rain,” that evokes all kinds of symbols in my head, not just <probability P that it’s going to rain>.
Admittedly, most of that isn’t input, strictly speaking… the fact that those symbols are activated is a fact about the receiver, not about the signal. (Though in many cases an expectation of those facts about the receiver shaped the form of the signal in the first place; in some cases the probability of that intention is itself one of the activated symbols; in some cases an expectation of that shaped the choice of symbol; and so forth. There are potentially many levels of communication. The receiver isn’t acting in isolation.)
In practice I don’t see how you can separate thinking about what the receiver does from thinking about what the input is. When I talk about “It’s going to rain” as a meaningful signal, I’m implicitly assuming loads of things about the receiver.
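To make the contrast concrete, here’s a toy sketch (every name in it is made up): the bare signal as a proposition plus a probability, versus the state it produces in the receiver.

```python
# Toy illustration of the gap between the bare signal and what it activates
# in the receiver. All names and numbers are made up.
from dataclasses import dataclass, field

@dataclass
class BareSignal:
    proposition: str    # the categorical proposition
    probability: float  # the "one extra parameter"

@dataclass
class ReceiverState:
    signal: BareSignal
    evoked: dict[str, float] = field(default_factory=dict)  # everything else the utterance activates

rain = BareSignal("it is going to rain", 0.8)
me = ReceiverState(rain, evoked={
    "I should bring an umbrella": 0.7,
    "the speaker wants me to bring an umbrella": 0.4,
    "the speaker expects me to draw that inference": 0.3,
    # ...and so on, one level of communication per line
})
```

Nothing in the second structure is recoverable from the first alone; that’s the sense in which I can’t separate what the input is from what the receiver does with it.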
So are you just proposing making a system that’s like the human brain in that we can’t pick out the influence of individual parts?
I’m asserting that if you actually want to build a system that understands utterances the way humans do (which, as I say, I now realize wasn’t your goal to begin with, which is fine), there are parts of human cognition that are non-optional, and some notion of pragmatics rooted in a model of the world is one of those.
In other words: yes, of course, the steering wheel is different from the engine, and if you don’t understand that you don’t really have any idea what’s going on. Agreed 100%. We want debugging tools that let us trace the stack through the various subsystems and confirm what they’re doing; otherwise (as you say) we don’t know what’s going on.
On the other hand, if all you have is the steering wheel, you may understand it perfectly, but it won’t actually go anywhere. Which is fine, if what you’re working on is a better-designed steering wheel.