The Definability of Truth paper says that in Kleene’s logic it is hard to judge which statements are undefined, because that judgment itself comes out as undefined. Does this mean the probabilistic approach adopted by MIRI is capable of separating cases where a statement cannot be assigned full certainty because of purely verbal (self-referential) paradoxes from statements whose truth is probabilistic for other reasons? In particular, I’d like to know whether it can discriminate between those and scientifically interesting paradoxes, though perhaps it’s too soon to be asking questions like that.
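For concreteness, here is my reading of the paper’s key schema, paraphrased rather than quoted; the parameter p and the worked-out liar example below are my own reconstruction, not something lifted verbatim from the paper:

% reflection schema (my paraphrase); p is an illustrative rational in (0, 1)
\[
  \forall \varphi\;\forall a,b\in\mathbb{Q}:\qquad
  a < \mathbb{P}(\ulcorner\varphi\urcorner) < b
  \;\Longrightarrow\;
  \mathbb{P}\bigl(\ulcorner a < \mathbb{P}(\ulcorner\varphi\urcorner) < b\urcorner\bigr) = 1
\]
\[
  G \;\leftrightarrow\; \mathbb{P}(\ulcorner G\urcorner) < p
  \qquad\Longrightarrow\qquad
  \mathbb{P}(G) = p
\]

If I’ve worked that through correctly, any value other than p contradicts the reflection schema, so the self-referential sentence gets pinned to a definite intermediate probability instead of coming out undefined; that is what prompts my question about whether such cases can be told apart from ordinary probabilistic uncertainty.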
It is possible to construct probabilistic logics that normatively characterize the behavior of ideal goal-oriented agents, but the actual human brain probably strings together all sorts of partial, ad hoc, redundant and/or multiply realized implementations of abstract languages. It is difficult to prove that an intelligence with an architecture like that will never do certain things in the future. In fact, it is probably better to model a given brain physically than to describe the abstract mathematical reasoning its workings implement, because the relevant wiring changes over time and the same calculation can be performed in different ways.
It occurs to me that humans might learn languages with all sorts of “essential richness” by generalizing from the rules needed to achieve certain tasks. We may be born with the potential to learn some of these languages in this way, but can an AI running a pure probabilistic logic learn other abstract languages by generalizing in the same way? It may not need to, mind you.