For Less Wrong, the best use of relevance logic would be to articulate counterfactuals without falling into the Löbian/self-confirming trap and blowing up.
What does it do?
What goal is all this intended to accomplish?
If I try to build an AI that lacks all this stuff, what sort of real-world task can it not solve?
Stuart explained his motivations in Paraconsistency and relevance: avoid logical explosions:
When you ask, “What does it do?” what is “it” referring to? Modal logic? Counterfactuals? Stuart’s specific application of modal logic and counterfactuals?
I’ll guess that you mean to ask about the whole apparatus of modal logic. Aside from Stuart’s stated goals—having to do with relevance and reactions to explosion and certain paradoxes, like the liar—modal logics have been used to study provability, knowledge and belief, moral obligation, tense, and action, just to name a few. You might also take a look at Section 3 of van Benthem’s book Modal Logic for Open Minds for some applications of modal logic.
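Most of those applications rest on the same relational (Kripke) semantics, which is simple enough to sketch directly. Here is a minimal evaluator for □ and ◇ over an explicit frame; the model, world names, and atoms are invented for illustration, not taken from any particular textbook or library:

```python
# Minimal Kripke-model evaluator: a set of worlds, an accessibility
# relation, and a valuation mapping each world to the atoms true there.
# This is an illustrative sketch, not any standard library's API.

def box(model, world, atom):
    """[]atom holds at `world` iff atom holds at every accessible world."""
    worlds, access, val = model
    return all(atom in val[w] for w in access.get(world, ()))

def diamond(model, world, atom):
    """<>atom holds at `world` iff atom holds at some accessible world."""
    worlds, access, val = model
    return any(atom in val[w] for w in access.get(world, ()))

# A three-world model: w1 can see w2 and w3; w2 and w3 are dead ends.
model = (
    {"w1", "w2", "w3"},
    {"w1": {"w2", "w3"}},
    {"w1": set(), "w2": {"p"}, "w3": {"p", "q"}},
)

print(box(model, "w1", "p"))      # True: p holds at both successors
print(box(model, "w1", "q"))      # False: q fails at w2
print(diamond(model, "w1", "q"))  # True: q holds at w3
```

Note that a world with no successors satisfies □p vacuously—this is exactly the kind of quirk that the different modal axiom systems (K, S4, S5, …) exist to regulate.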
Counterfactuals have found special application in causal inference. Lewis’ approach to counterfactuals provides a semantics for the Neyman-Rubin approach to causal inference. (See, for more detail, Glymour’s discussion piece following Holland’s well-known 1986 paper on causal inference.) Pearl takes it a step further by proving that the do() calculus is equivalent to Rubin’s and to Lewis’ approaches.
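To make the observing/intervening distinction concrete, here is a toy structural model of my own (all variable names and numbers are invented): Z confounds X and Y, so conditioning on X = 1 shifts our beliefs about Z, while do(X = 1) cuts the Z → X arrow and leaves Z at its prior.

```python
# Exact enumeration in a tiny structural causal model:
#   Z ~ Bernoulli(0.5)          (confounder)
#   P(X=1 | Z) = 0.8 if Z else 0.2
#   Y = X + 2*Z                 (outcome, deterministic given X and Z)
# All numbers here are made up for illustration.

def p_x_given_z(x, z):
    p1 = 0.8 if z == 1 else 0.2
    return p1 if x == 1 else 1 - p1

def expected_y_observational(x_obs):
    """E[Y | X = x_obs]: we *see* X, so Z's posterior shifts."""
    joint = {z: 0.5 * p_x_given_z(x_obs, z) for z in (0, 1)}
    total = sum(joint.values())
    return sum((x_obs + 2 * z) * p / total for z, p in joint.items())

def expected_y_do(x_set):
    """E[Y | do(X = x_set)]: the Z -> X edge is cut; Z keeps its prior."""
    return sum((x_set + 2 * z) * 0.5 for z in (0, 1))

print(expected_y_observational(1))  # about 2.6: seeing X=1 makes Z=1 likely
print(expected_y_do(1))             # about 2.0: setting X=1 says nothing about Z
```

The gap between the two numbers is pure confounding; in an unconfounded model the two expectations would coincide.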
Your last question is a whole lot harder to answer. Maybe all of these things can be implemented in some way that does not require any modal logic or anything formally equivalent to modal logic. One might try a non-counterfactual approach to causal inference, like that suggested by Dawid, for example. (However, Dawid’s approach is not equivalent to the Neyman-Rubin-Lewis-Pearl approach: they don’t always endorse the same inferences.) I don’t know enough about AI approaches to the other problems to say whether or not modal logics have serious competitors. Maybe you could point me to some further reading(s)?
Wot Wei_Dai said.
I personally think that taking a probabilistic approach to logical truths (and using P(A|B) rather than B→A) is going to be the best way of fixing the Loebian and logical uncertainty problems of UDT. But before doing this, I thought I should at least glance at the work that logicians have been doing, and put it out on Less Wrong, in case someone gets inspired brainwaves from it.
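For what it’s worth, the gap between P(A|B) and the material conditional B→A is easy to exhibit by brute force over a toy joint distribution (all numbers invented): B→A is true whenever B is false, so it can be highly probable even when A is very unlikely given B.

```python
# A toy joint distribution over propositions A and B (numbers made up).
# Keys are (A, B) truth values; values are probabilities summing to 1.
dist = {
    (True,  True):  0.01,
    (False, True):  0.09,
    (True,  False): 0.45,
    (False, False): 0.45,
}

def prob(event):
    """Probability that `event` (a predicate on (A, B)) holds."""
    return sum(p for (a, b), p in dist.items() if event(a, b))

# P(A | B): how likely A is once B is actually observed.
p_a_given_b = prob(lambda a, b: a and b) / prob(lambda a, b: b)

# P(B -> A): the material conditional, true whenever B is false.
p_b_implies_a = prob(lambda a, b: (not b) or a)

print(p_a_given_b)    # about 0.1  -- A is unlikely given B
print(p_b_implies_a)  # about 0.91 -- yet the conditional is highly probable
```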
Paraconsistent logics are weaker than classical logic, so an AI built on one can prove less than a classical AI. The hope is that it would still be able to prove a great deal, while no longer being able to derive the problematic Löbian spurious proofs.
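To sketch how a paraconsistent logic blocks explosion, here is a brute-force entailment checker for Priest’s three-valued Logic of Paradox (LP), in which both “true” and “both” are designated values; the encoding is my own.

```python
from itertools import product

# Priest's Logic of Paradox (LP): truth values F=0, BOTH=1, T=2.
# A sentence is "accepted" when its value is designated (BOTH or T).
F, BOTH, T = 0, 1, 2
DESIGNATED = {BOTH, T}

def neg(v):
    """LP negation: T<->F, BOTH is its own negation."""
    return {F: T, BOTH: BOTH, T: F}[v]

def entails_lp(premise_fns, conclusion_fn, n_atoms):
    """Check LP entailment by brute force over all 3**n_atoms valuations:
    whenever every premise is designated, the conclusion must be too."""
    for vals in product((F, BOTH, T), repeat=n_atoms):
        if all(p(vals) in DESIGNATED for p in premise_fns):
            if conclusion_fn(vals) not in DESIGNATED:
                return False
    return True

# Explosion: does {A, not-A} entail an unrelated B?  (vals[0]=A, vals[1]=B)
explosion = entails_lp(
    [lambda v: v[0], lambda v: neg(v[0])],
    lambda v: v[1],
    n_atoms=2,
)
print(explosion)  # False: A = "both", B = "false" is a countermodel
```

Classically, explosion holds vacuously because no valuation makes both A and ¬A true; in LP the valuation A = “both” does, and it puts no constraint at all on B.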
To repeat my complaint here: the problems with UDT are conundrums, not obstacles. Finding workarounds doesn’t by itself explain what was wrong or why the workarounds should work. A workaround is only useful to the extent that it comes to be understood better than the original prototype.
What was wrong is that we had yet another version of Russell’s/liar/self-reference paradox. Things that reason about themselves (even implicitly) cause problems. So looking at systems designed to avoid those paradoxes is probably worth doing.
The distinction I’m making is between techniques designed to avoid problems (refusing to consider the situations that contain them, or reducing the damage they cause: symptomatic treatment) and techniques that allow us to resolve and understand them. For example, Gödel numbering is the kind of technique that significantly clarified what was going on with the self-reference paradoxes; once you have it, you are dealing with a complicated structure rather than a confusing paradox.
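As a reminder of how Gödel numbering turns syntax into arithmetic, here is the standard prime-power encoding, sketched with an invented symbol table: each symbol gets a code, a formula becomes the product of PRIMES[i]**code_i, and unique factorization decodes it again.

```python
# Prime-power Goedel numbering: symbol sequence -> single integer.
# The symbol codes below are arbitrary illustrative choices.
SYMBOLS = {"0": 1, "S": 2, "=": 3, "(": 4, ")": 5, "+": 6}
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def encode(formula):
    """Map the i-th symbol with code c to the factor PRIMES[i]**c."""
    n = 1
    for prime, sym in zip(PRIMES, formula):
        n *= prime ** SYMBOLS[sym]
    return n

def decode(n):
    """Recover the symbol string by repeated division; decoding is
    unambiguous because prime factorizations are unique."""
    codes = {v: k for k, v in SYMBOLS.items()}
    out = []
    for prime in PRIMES:
        exp = 0
        while n % prime == 0:
            n //= prime
            exp += 1
        if exp == 0:
            break
        out.append(codes[exp])
    return "".join(out)

g = encode("S0=S0")   # the formula "S0 = S0" as one number
print(g)              # 808500 = 2^2 * 3^1 * 5^3 * 7^2 * 11^1
print(decode(g))      # round-trips back to "S0=S0"
```

Once formulas are numbers, statements *about* formulas (like “x is provable”) become statements about numbers, which the formal system can itself express—that is the structure that replaces the paradox.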