DP: I’m not saying that hardware is infinitely reliable, or mistaking a camera for direct access to reality, or anything like that. But, at some point, in practice, we get what we get, and we have to take it for granted. Maybe you consider the camera unreliable, but you still directly observe what the camera tells you. Then you would make probabilistic inferences about what light hit the camera, based on definite observations of what the camera tells you. Or maybe it’s one level more indirect than that, because your communication channel with the camera is itself imperfect. Nonetheless, at some point, you know what you saw—the bits make it through the peripheral systems, and enter the main AI system as direct observations, of which we can be certain. Hardware failures inside the core system can happen, but you shouldn’t be trying to plan for that in the reasoning of the core system itself—reasoning about that would be intractable. Instead, to address that concern, you use high-reliability computational methods at a lower level, such as redundant computations on separate hardware to check the integrity of each computation.
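(As a concrete illustration of the kind of redundancy DP has in mind, here is a minimal sketch: run the same computation on several independent workers and accept an answer only when enough of them agree. The worker setup, the agreement threshold, and the error handling are hypothetical details, not anything specified in the dialogue.)

```python
from collections import Counter

def run_redundantly(computation, workers, min_agreement=2):
    """Run the same computation on several independent workers (stand-ins for
    separate hardware) and return the answer the majority agrees on.

    If no answer reaches the required level of agreement, raise an error
    instead of passing possibly-corrupted bits up to the core system.
    """
    results = [worker(computation) for worker in workers]
    answer, count = Counter(results).most_common(1)[0]
    if count < min_agreement:
        raise RuntimeError("redundant runs disagree; suspected hardware fault")
    return answer

# Usage sketch: three "workers" that simply evaluate the computation directly.
workers = [lambda f: f() for _ in range(3)]
assert run_redundantly(lambda: 2 + 2, workers) == 4
```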
RJ: Then the error-checking at the lower level must be seen as part of the rational machinery.
DP: True, but all the error-checking procedures I know of can also be dealt with in a classical Bayesian framework.
RJ: Can they? I wonder. But, I must admit, to me, this is a theory of rationality for human beings. It’s possible that the massively parallel hardware of the brain performs error-correction at a separate, lower level. However, it is also quite possible that it does not. An abstract theory of rationality should capture both possibilities. And is this flexibility really useless for AI? You mention running computations on different hardware in order to check everything. But this requires a rigid setup, where all computations are re-run a set number of times. We could also have a more flexible setup, where computations have confidence attached, and running on different machines increases that confidence. This would allow for finer-grained control, re-running computations only when high confidence really matters. And need I remind you that belief prop in Bayesian networks can be understood in radical probabilist terms? In this view, a belief network can be seen as a network of experts communicating with one another. This perspective has been, as I understand it, fruitful.
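(A minimal sketch of the more flexible setup RJ describes, under assumed details: each result carries a confidence, an agreeing run on an independent machine raises that confidence, and extra runs are only paid for when a caller actually demands high confidence. The per-run error rate and the confidence arithmetic below are illustrative assumptions, not part of RJ’s proposal.)

```python
from dataclasses import dataclass

# Assumed probability that any single run returns a corrupted answer.
PER_RUN_ERROR = 1e-6

@dataclass
class Result:
    value: object
    confidence: float  # probability that `value` is uncorrupted

def compute(computation):
    """One run on one machine; confidence reflects the assumed error rate."""
    return Result(value=computation(), confidence=1.0 - PER_RUN_ERROR)

def ensure_confidence(computation, result, required, run_on_other_machine):
    """Re-run on independent machines only until the result is trusted
    enough for the purpose at hand."""
    while result.confidence < required:
        check = run_on_other_machine(computation)
        if check != result.value:
            # Disagreement: distrust both runs and start over.
            return ensure_confidence(computation, compute(computation),
                                     required, run_on_other_machine)
        # Agreement: the chance that both independent runs were corrupted in
        # the same way shrinks multiplicatively (a crude independence assumption).
        result.confidence = 1.0 - (1.0 - result.confidence) * PER_RUN_ERROR
    return result
```

A routine result can stop after a single run, while one feeding a high-stakes decision can be pushed to whatever confidence that decision requires.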
DP: Sure, but we can also see belief prop as just an efficient way of computing the regular Bayesian math. The efficiency can come from nowhere special, rather than from a core insight about rationality. Algorithms are like that all the time—I don’t see the fast Fourier transform as coming from some basic insight about rationality.
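(A worked illustration of DP’s reading, on a toy chain $A \to B \to C$: the marginal of $C$ can be computed by brute force over the joint, or by pushing the sums inward, and the inner sum is exactly a belief-prop message. Nothing beyond ordinary probability theory is involved.)

$$
P(c) \;=\; \sum_{a,b} P(a)\,P(b \mid a)\,P(c \mid b) \;=\; \sum_{b} P(c \mid b) \underbrace{\sum_{a} P(b \mid a)\,P(a)}_{\text{message } m_{A\to B}(b)}
$$

For a chain of $n$ variables with $k$ values each, this turns a sum over $k^{n-1}$ joint configurations into roughly $n$ local sums of size $k^2$; it is the same push-the-sums-inward use of the distributive law that the fast Fourier transform exploits.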
RJ: The “factor graph” community says that belief prop and the fast Fourier transform actually come from the same insight! But I concede the point; we don’t actually need to be radical probabilists to understand and use belief prop. But why are you so resistant? Why are you so eager to posit a well-defined boundary between the “core system” and the environment?
DP: It just seems like good engineering. We want to deal with a cleanly defined boundary if possible, and it seems possible. And this way we can reason explicitly about the meaning of sensory observations, rather than implicitly being given the meaning by way of uncertain updates which stipulate a given likelihood ratio with no model. And it doesn’t seem like you’ve given me a full alternative—how do you propose to, really truly, specify a system without a boundary? At some point, messages have to be interpreted as uncertain evidence. It’s not like you have a camera automatically feeding you virtual evidence, unless you’ve designed the hardware to do that. In which case, the boundary would be the camera—the light waves don’t give you virtual evidence in the format the system accepts, even if light is “fundamentally uncertain” in some quantum sense or whatever. So you have this boundary, where the system translates input into evidence (be it uncertain or not); you haven’t eliminated it.
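(For concreteness, this is the standard virtual-evidence rule DP is gesturing at: the update supplies only a likelihood ratio $\lambda$ in favor of a proposition $H$, with no model of where that ratio came from.)

$$
\frac{P'(H)}{P'(\neg H)} \;=\; \lambda \cdot \frac{P(H)}{P(\neg H)}, \qquad \text{equivalently} \qquad P'(H) \;=\; \frac{\lambda\, P(H)}{\lambda\, P(H) + P(\neg H)}.
$$

A message reporting $\lambda = 4$ moves a hypothesis from probability $0.5$ to $0.8$ without ever saying what observation justified that ratio; that implicit assignment of meaning is exactly what DP is uneasy about.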
RJ: That’s true, but you’re supposing the boundary is represented in the AI itself as a special class of “sensory” propositions. Part of my argument is that, due to logical uncertainty, we can’t really make this distinction between sensory observations and internal propositions. And, once we make that concession, we might as well allow the programmer/teacher to introduce virtual evidence about whatever they want; this allows direct feedback on abstract matters such as “how to think about this”, which can’t be modeled easily in classic Bayesian settings such as Solomonoff induction, and may be important for AI safety.
DP: Very well. While I still hold out hope for a fully Bayesian treatment of logical uncertainty, I concede that I can’t provide you with one. And, sure, providing virtual evidence about arbitrary propositions does seem like a useful way to train a system. I still suspect that there’s a fully Bayesian way to do everything you might want to do...