I personally think that taking a probabilistic approach to logical truths (and using P(A|B) rather than B→A) is going to be the best way of fixing the Loebian and logical uncertainty problems of UDT. But before doing this, I thought I should at least glance at the work that logicians have been doing, and put it out on Less Wrong, in case someone gets inspired brainwaves from it.
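To make the contrast concrete, here is a small sketch of my own (not part of the original argument), using a made-up joint distribution to show how P(A|B) and the probability of the material implication B→A can come apart:

```python
# A toy sketch (my own, with a made-up joint distribution) contrasting the
# conditional probability P(A|B) with the material implication B -> A.

# Hypothetical joint distribution over two binary propositions A and B;
# the numbers are arbitrary and sum to 1.
joint = {
    (True, True): 0.05,
    (True, False): 0.45,
    (False, True): 0.05,
    (False, False): 0.45,
}

p_b = sum(p for (a, b), p in joint.items() if b)
p_a_and_b = joint[(True, True)]

# P(A|B) = P(A and B) / P(B), defined only when P(B) > 0.
p_a_given_b = p_a_and_b / p_b

# B -> A is a truth function: it holds in every world except B-true, A-false.
# Its probability is the total weight of the worlds where it holds.
p_implication = sum(p for (a, b), p in joint.items() if (not b) or a)

print(f"P(A|B)    = {p_a_given_b:.2f}")    # 0.50: B tells us nothing about A
print(f"P(B -> A) = {p_implication:.2f}")  # 0.95: high mostly because B is rare
```

The implication looks almost certain merely because B is rarely true, while P(A|B) reflects the actual relationship between the two propositions; that gap is one reason to prefer P(A|B) when reasoning under logical uncertainty.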
If I try to build an AI that lacks all this stuff, what sort of real-world task can it not solve?
Paraconsistent logics are weaker than classical logic, so an AI built on one has less to work with than a classical AI and can solve fewer tasks. The hope is that it would still be able to solve a lot, while being unable to derive the problematic Loebian spurious proofs.
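For concreteness, here is a small sketch of my own (not from the comment), using Priest's three-valued Logic of Paradox (LP) as one standard paraconsistent logic, showing that the explosion rule, from A and not-A infer anything, fails:

```python
# Priest's Logic of Paradox (LP): truth values T (true), B (both), F (false).
# A sentence is "designated" (acceptable) if its value is T or B.  Entailment
# holds when every valuation that makes all premises designated also makes
# the conclusion designated.
from itertools import product

VALUES = ["T", "B", "F"]
ORDER = {"F": 0, "B": 1, "T": 2}            # ordering F < B < T
DESIGNATED = {"T", "B"}

def neg(x):
    return {"T": "F", "B": "B", "F": "T"}[x]

def conj(x, y):
    return min(x, y, key=lambda v: ORDER[v])    # conjunction = minimum

def entails(premises, conclusion, atoms):
    """Brute-force check of LP entailment over all valuations of the atoms."""
    for vals in product(VALUES, repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(p(v) in DESIGNATED for p in premises) and conclusion(v) not in DESIGNATED:
            return False
    return True

# Premise: A and not-A.  Conclusion: an unrelated atom C.
contradiction = lambda v: conj(v["A"], neg(v["A"]))
arbitrary = lambda v: v["C"]

print(entails([contradiction], arbitrary, ["A", "C"]))   # False: no explosion
```

In classical logic the corresponding check holds vacuously (A and not-A is never true), which is exactly the explosion that lets a single spurious contradiction prove anything.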
… is going to be the best way of fixing the Loebian and logical uncertainty problems of UDT
To repeat my complaint here: the problems with UDT are conundrums, not obstacles. Finding workarounds doesn't by itself explain what was wrong or why the workarounds are supposed to work. This would only be useful to the extent that a workaround becomes better understood than the original prototype.
What was wrong is that we had yet another version of the Russell/liar/self-reference paradox: things reasoning about themselves (even implicitly) cause problems. So looking at systems designed to avoid those paradoxes is probably worth doing.
So looking at systems designed to avoid those paradoxes is probably worth doing.
The distinction I'm making is between techniques designed to avoid problems (refusing to consider the situations that contain them, or reducing the damage they cause; symptomatic treatment) and techniques that let us resolve or understand them. For example, Goedel numbering is the kind of technique that significantly clarified what was going on with the self-reference paradoxes, at which point you are dealing with complicated structure rather than confusing paradoxes.
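For concreteness, a toy sketch of my own (symbol codes chosen arbitrarily) of Goedel numbering by prime exponents, the device that turns statements about formulas into statements about numbers:

```python
# Toy Goedel numbering: a formula s1 s2 ... sn is encoded as 2^c1 * 3^c2 * 5^c3 ...,
# where ci is the code of symbol si.  The encoding is injective, so a theory of
# arithmetic can talk about its own formulas by talking about their numbers.

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]     # enough for short formulas

SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6, "x": 7}  # arbitrary codes
DECODE = {v: k for k, v in SYMBOLS.items()}

def godel_number(formula):
    """Encode a string of symbols as a single natural number."""
    n = 1
    for p, sym in zip(PRIMES, formula):
        n *= p ** SYMBOLS[sym]
    return n

def decode(n):
    """Recover the symbol string from its Goedel number by trial division."""
    out = []
    for p in PRIMES:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e == 0:
            break
        out.append(DECODE[e])
    return "".join(out)

g = godel_number("S0=S0")
print(g)            # 808500: a plain number standing for the formula "S0=S0"
print(decode(g))    # "S0=S0"
```

Once formulas are numbers, self-referential sentences like the Goedel sentence can be built as ordinary arithmetic statements; that is the "complicated structure" that replaces the confusing paradox.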
Wot Wei_Dai said.