Yes, a real-life reasoner would have to use probabilistic reasoning to carry out these sorts of inferences. However, we do not yet have a real understanding of how to do probabilistic reasoning about logical statements, although there has been some work on it in the past. This is one topic MIRI is currently doing research on. In the meantime, we also examine problems of self-reference in ordinary deductive logic, since we understand it very well. It's not certain that results there will carry over into the probabilistic setting, and it's possible that these problems simply disappear once you move to probabilistic reasoning; but there's no strong reason to consider that overwhelmingly likely, and if they don't disappear, it seems likely that at least some of the insights gained from thinking about deductive logic will carry over. In addition, when an AI reasons about another AI, it seems likely to use deductive logic when reasoning about the other AI's source code, even if it also has to use probabilistic reasoning to connect the results obtained that way to the real world, where the other AI runs on an imperfect processor and its source code isn't known with certainty.
More here.
I’ll accept that doing everything probabilistically is expensive, but I don’t see why assigning probabilities to imported statements wouldn’t at least solve the problem. The more links in the chain of trust, the weaker it is. Eventually, someone needs the statement reliably enough that it becomes necessary to check it.
And of course any chain of trust like that ought to have a system for providing proof upon demand, which will be invoked roughly every N steps of trust. The recipients of the proof would then become nodes of authority on the issue.
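As a rough illustration of this scheme (my own sketch, not anything from the original discussion), here is a minimal Python example. The per-hop reliability, the proof interval N, and the `verify_proof` stand-in are all made-up assumptions: confidence in an imported statement decays with each link in the chain, and a full proof is demanded roughly every N steps, after which the recipient becomes a fresh node of authority.

```python
# Hypothetical sketch of a probabilistic chain of trust with proof-on-demand.
# All constants and the verify_proof() stand-in are illustrative assumptions.

PER_HOP_RELIABILITY = 0.99   # assumed chance each reasoner relays the statement faithfully
PROOF_INTERVAL_N = 10        # demand a full proof roughly every N steps of trust
TRUST_FLOOR = 0.90           # also demand proof if confidence drops below this threshold


def verify_proof() -> float:
    """Stand-in for actually checking the proof; assume checking is near-certain."""
    return 0.999


def propagate(initial_confidence: float, hops: int) -> float:
    """Confidence in an imported statement after `hops` handoffs, re-verifying
    (and restoring near-full confidence) every PROOF_INTERVAL_N hops."""
    confidence = initial_confidence
    for step in range(1, hops + 1):
        confidence *= PER_HOP_RELIABILITY  # each link in the chain weakens trust
        if step % PROOF_INTERVAL_N == 0 or confidence < TRUST_FLOOR:
            # The recipient asks for the proof, checks it, and becomes
            # a fresh node of authority on the statement.
            confidence = verify_proof()
    return confidence


if __name__ == "__main__":
    print(f"After 25 hops: confidence ≈ {propagate(0.999, 25):.4f}")
```

The specific numbers are placeholders; the point is only that multiplicative decay plus periodic verification keeps the chain from getting arbitrarily weak.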
This seems rather like how actual people operate (though we often skip the ‘where to get proof of this’ step), so any proof that it will become unworkable has a bit of a steep hill to climb.
I see. Thanks for the link.