At least we can query the Bayes net to ask “what it believes about X,” whereas we can’t necessarily do so with the logic-based system.
That assumes that the net contains a node corresponding exactly to what we mean by “X”, that we know which node corresponds exactly to “X”, and that we know how we know it. With logical rules we can at least have a formal proof that “X” in some model is equivalent to a particular term X in the logical language, and then ask “What can we prove from the logical rules about X?”
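To make the "what can we prove about X" framing concrete, here is a toy sketch of my own (not anything from the systems under discussion): a minimal forward-chaining prover over made-up Horn-clause-style rules, where transparency amounts to being able to exhibit the derivation of any conclusion.

```python
# A minimal sketch, not anyone's actual system: forward chaining over
# Horn-clause-style rules. The predicate names are invented for illustration.

RULES = [
    (frozenset(), "bird(tweety)"),                        # a fact: no premises
    (frozenset({"bird(tweety)"}), "has_wings(tweety)"),
    (frozenset({"has_wings(tweety)"}), "can_fly(tweety)"),
]

def forward_chain(rules):
    """Apply rules until no new conclusions appear; return everything provable."""
    known = set()
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

provable = forward_chain(RULES)
print("can_fly(tweety)" in provable)  # True: a derivation (i.e. a proof) exists
```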
Do the categories above really “carve reality at its joints” with respect to transparency? Does a system’s status as a logic-based system or a Bayes net reliably predict its transparency, given that in principle we can use either one to express a probabilistic model of the world?
My intuition is that the ability to write down valid proofs of how a system behaves constitutes transparency, and the lack of that ability constitutes a black box.
How much of a system’s transparency is “intrinsic” to the system, and how much of it depends on the quality of the user interface used to inspect it? How much of a “transparency boost” can different kinds of systems get from excellently designed user interfaces?
Systems that are not amenable to formal proofs or that have numerous edge cases will have less transparency. Tools for building succinct, orthogonal, modular, formalized systems will probably result in much more transparent systems. The most amazing tool in the world for training artificial neural networks will still produce a black box (unless it also happens to provide a formal model of the network’s behavior in an accessible and meaningful format; in which case why even bother running the ANN?).
Bayes nets can answer many queries not corresponding to any one node, most famously of the form P(A|B).
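As a concrete sketch of that point (my own toy example; the variables and numbers are invented, not from the discussion above), here is a hand-rolled two-node net, Rain → WetGrass, queried by enumeration for P(Rain | WetGrass), a quantity that is not stored at any single node.

```python
# A minimal sketch: a two-node Bayes net queried for P(A|B) by enumeration.
# All probabilities here are illustrative only.

P_RAIN = {True: 0.2, False: 0.8}                      # P(Rain)
P_WET_GIVEN_RAIN = {True: {True: 0.9, False: 0.1},    # P(Wet | Rain)
                    False: {True: 0.2, False: 0.8}}

def joint(rain, wet):
    """Joint probability read off the net's two factors."""
    return P_RAIN[rain] * P_WET_GIVEN_RAIN[rain][wet]

def posterior_rain_given_wet(wet=True):
    """P(Rain = True | Wet = wet), computed by summing out the joint."""
    numerator = joint(True, wet)
    evidence = sum(joint(r, wet) for r in (True, False))
    return numerator / evidence

print(posterior_rain_given_wet(True))  # ~0.529, even though no node stores it
```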